Modeling exploratory search as a knowledge-intensive process

September 9, 2018


Abstract: Searching as Learning and Information Seeking require exploratory search to be modeled in order to support learning. This paper introduces a model of exploratory search that was applied to web searching in language teacher education, which promoted its evolution and validation and enabled a visualization of search patterns and the learning process. The model helped clarify best practices associated with users’ decision-making about suitable and unsuitable information, and captured the relevance of the context variables, personal skills, and expertise that users apply as filters during the search.

Tibau, M., Siqueira, S.W.M., Nunes, B.P., Marenzi, I., Bortoluzzi, M.: Modeling exploratory search as a knowledge-intensive process. In: Proceedings of the 18th IEEE International Conference on Advanced Learning Technologies (ICALT 2018), Mumbai. IEEE, New York (2018).

DOI: 10.1109/ICALT.2018.00015


A summarization of Rio de Janeiro’s 2018 summer

April 13, 2018

This summarization adapts an illustration from Edward Tufte’s book “The Visual Display of Quantitative Information”. The original illustration comes from The New York Times (Fig 1).


Fig 1: Edward Tufte’s illustration of New York City’s 2003 weather

Mine was created using the R packages dplyr and tidyr to preprocess and summarize a dataset collected from the Average Daily Temperature archive provided by the University of Dayton.
The chart itself (Fig 2) was created using the ggplot2 package. Temperatures are in Fahrenheit.
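The preprocessing was done in R, but the same summarization can be sketched with pandas. The layout and values below are invented for illustration, not the actual Dayton archive format:

```python
import pandas as pd

# Hypothetical daily records (the real dataset spans 1995-2018),
# temperatures in Fahrenheit.
temps = pd.DataFrame({
    "date": pd.to_datetime(
        ["1995-01-01", "2017-01-01", "2018-01-01",
         "1995-01-02", "2017-01-02", "2018-01-02"]),
    "temp": [91.0, 93.0, 97.0, 88.0, 90.0, 86.0],
})
temps["year"] = temps["date"].dt.year
temps["doy"] = temps["date"].dt.dayofyear

hist = temps[temps["year"] <= 2017]  # the 1995-2017 band
cur = temps[temps["year"] == 2018].set_index("doy")["temp"]

# Historical summary per calendar day: mean and extremes.
band = hist.groupby("doy")["temp"].agg(["mean", "max", "min"])

# Days in 2018 that set a record against the historical series.
hottest = int((cur > band["max"]).sum())
coldest = int((cur < band["min"]).sum())
```

From a frame like `band`, the ribbon and line layers of the final ggplot2 chart follow directly.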


Fig 2: Adaptation of Tufte’s original chart for Rio de Janeiro’s weather (summer 2018)

In my adaptation, the time series in light brown represents the average maximum and minimum temperatures from 1995 to 2017, while the dark brown represents the mean temperature for each day along with a 95% confidence interval.
From the chart it is possible to see that, from January 1 to March 20, 2018, Rio de Janeiro had 34 days that were the hottest since 1995 and 1 day that was the coldest. The period represented corresponds to the Southern Hemisphere summer.

Exploratory analysis and regression model helping fight Zika

March 2, 2017


Alerta Zika! was a collaborative event to explore the potential of data and technology to improve responses to the Zika virus (more information here). The Inter-American Development Bank organized it with the support of several partners, including Rio de Janeiro City Hall and some major universities based in the city. From December 2nd to 3rd, 2016, about 10 registered teams explored epidemiological, environmental, and social factors to understand and explain the progress of the disease. It was a day and a half of hard work contributing to the efforts to fight Zika in Rio de Janeiro. We gained access to a dataset with all the cases of Zika, Dengue, and Chikungunya registered in the city of Rio de Janeiro during 2015 and 2016. To get to know the data, our team started to ‘play’ with the dataset and check the variables. In doing so, we became curious about Zika’s evolution pattern and its role during the outbreak in the early months of 2016.

Our hypothesis was that the disease’s propagation pattern and its correlations across time, city areas, and weather could be used as an indicator of where and when the disease spreads, helping city officials decide how best to allocate resources. We then set as our goal creating a map of Rio de Janeiro showing the historical evolution of Zika over time and temperature. During several conversations with representatives of the municipal health secretariat, we wondered whether a social development indicator could provide insights into the spreading pattern. We decided to include the HDI (Human Development Index) – known in Brazil as IDH – as a social parameter.

We then defined as our target variables the coordinates (latitude and longitude), the dates the cases occurred, the temperature over the seasons, and social development indicators for Rio covering income, education level, and longevity. Our preliminary tasks were the creation of a grid covering the Rio de Janeiro city map and of a data frame aggregating the variables subset from different datasets. Our first goal in the exploratory analysis was to explore the shape of the distributions. The grid helped us check where the cases were located, exploring areas of about 400 meters – the mosquito’s range – and cluster the patients’ cases into broader areas. It also allowed us to check how far the disease had spread through the city and to identify the areas where most of the cases took place.
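As a rough sketch of the grid idea (with invented coordinates; the real work used the full case dataset), snapping each case to a cell of roughly 400 meters makes counting cases per area straightforward:

```python
import math

# Hypothetical case coordinates (latitude, longitude); the real analysis
# used every notified case in the 2015-2016 dataset.
cases = [(-22.9035, -43.2096), (-22.9040, -43.2090), (-22.9700, -43.1820)]

CELL_DEG = 400 / 111_000  # ~400 m (the mosquito range) in degrees

def cell(lat, lon):
    """Snap a coordinate to its ~400 m grid cell."""
    return (math.floor(lat / CELL_DEG), math.floor(lon / CELL_DEG))

# Count cases per cell; the densest cells point to the hot spots.
counts = {}
for lat, lon in cases:
    key = cell(lat, lon)
    counts[key] = counts.get(key, 0) + 1

hotspots = sorted(counts.items(), key=lambda kv: -kv[1])
```

Clustering neighboring hot cells then gives the broader areas mentioned above.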

Performing a time-series analysis, we were able to identify a correlation between temperature and the number of cases. At this point, understanding the mosquito’s life cycle is valuable. Aedes aegypti flourishes in temperatures ranging from 23 to 28 °C (about 73 to 82 °F). A few degrees below or above this range does not necessarily kill the mosquito, but it makes the environment less favorable to its development, slowing its life cycle. From the egg to inoculating the virus in an individual there is a period of 20 to 25 days, so the previous month’s mosquitoes are responsible for the current month’s patients. As can be seen in the plots below, comparing the disease cases across the city by month with the temperature variation per day of the previous month, the outbreak during March and April (plots 3 and 4) follows the ideal conditions observed throughout February and March, when most days stayed within the 23–28 °C range. The red circles correspond to the areas with the majority of cases.
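The one-month lag reasoning can be checked with a simple lagged correlation. The numbers below are invented for illustration; the actual analysis compared monthly case maps with the previous month’s daily temperatures:

```python
# Hypothetical monthly aggregates: mean temperature (Celsius) of the
# previous month paired with the current month's case count.
temp_prev_month = [27.1, 26.4, 25.8, 24.9]   # Jan..Apr temperatures
cases_next_month = [310, 295, 270, 240]      # Feb..May case counts

def pearson(x, y):
    """Plain Pearson correlation, no external libraries."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

r = pearson(temp_prev_month, cases_next_month)  # strongly positive here
```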

Plot 1     


Plot 2


Plot 3


Plot 4


Plot 5


Plot 6


Plot 7


This led us to our first meaningful insight: the temperature from the previous month seems to affect the number of cases in the current month.

As we shifted our attention to the social indicator data at hand, we identified a curious behavior: some critical areas during the outbreak shared a similarly low IDH coefficient. The plot below provides visual support. The orange circle sizes relate to the IDH level – smaller circle, lower IDH, and vice versa.

Plot 8


The highlighted areas in the plots above correspond to Maré (a neighborhood), the far-north zone, and the far-west zone of the city. These areas share lower social indicator levels compared to other parts of the municipality.

Although income does not seem to be a social influence on the outbreak – as can be seen in the plot below comparing the behavior of the south zone, the wealthiest part of the city, with the cases in March and April – there is a peculiarity to consider. In this particular area, there is a huge economic disparity: the most exclusive addresses lie within walking distance of favelas (slums usually located on the hills around the area), where the IDH is similar to that in the previous plot.

Plot 9


From this observation, we drew a second meaningful insight: the social indicator (IDH) seems to act as an influencing force in the areas with the highest number of cases during the outbreak.

As we went further in our analysis, another curious behavior caught our attention. Even as the temperature dropped out of the 23–28 °C range, some areas kept appearing among the top case counts (as can be seen in the comparisons below, from May to July).

Plot 10



What these areas have in common is that woods and forests surround them all. This common factor provided the third meaningful insight we delivered: some recurrent disease focus areas seem to grow around or close to wooded and forested areas.

Exploratory analysis is usually a good start to predictive modeling because it helps to understand the datasets a little further and to summarize their main characteristics. Exploring the data and formulating hypotheses that could lead to new data collection and experiments is a major component of extracting usable information from data, suggesting hypotheses, and supporting the selection of appropriate statistical tools and techniques. Our main goal at the data expedition was to take a first step toward understanding past behavior, preparing the ‘seeds’ for a future ‘crop’. Our third-place award was a source of pride and seems to indicate that this goal was accomplished.

After the Data Expedition

We continued exploring the data and aggregating other variables. Our goal was to obtain a predictive model that could build on the initial exploratory analysis. The new variables were population per neighborhood and rainfall. We also added more temperature data covering the final months of 2016 and early January 2017.

The first choice was a simple linear regression using the variable population per neighborhood to predict the number of cases. Below are some code chunks in R and statistical readings (we intend to show more in a markdown file – a format that can combine text, code, and plots).

The model:


A quick view of the dataset:


“bairro” stands for neighborhood; “casos_zika” for Zika cases; and “populacao” for population.
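A minimal Python/NumPy equivalent of this simple regression, using toy per-neighborhood numbers with the dataset’s column names (the real fit was done in R on the full dataset):

```python
import numpy as np

# Toy per-neighborhood aggregates; column names mirror the dataset
# ("populacao" = population, "casos_zika" = Zika cases).
populacao = np.array([8_000, 25_000, 60_000, 120_000, 330_000], dtype=float)
casos_zika = np.array([12.0, 40.0, 95.0, 180.0, 520.0])

# Ordinary least squares, the same lm(casos_zika ~ populacao) fit as in R.
slope, intercept = np.polyfit(populacao, casos_zika, deg=1)

# R-squared as a quick goodness-of-fit reading.
pred = intercept + slope * populacao
ss_res = ((casos_zika - pred) ** 2).sum()
ss_tot = ((casos_zika - casos_zika.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
```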

Some statistical readings from the RStudio console:


Diagnostic Plots

In plot 1 (Residuals vs. Fitted), the residuals are somewhat equally spread around a horizontal line, but there are also outliers. In plot 2 (Normal Q-Q), the residuals seem to be normally distributed, at least to some extent.




Plot 2



In plot 3 (Scale-Location), complementing plot 1, some residuals are spread equally along the ranges of the predictors, showing some homoscedasticity. Plot 4 (Residuals vs. Leverage) identified the influential observations as #120 and #23.

Plot 3



Plot 4


Based on the thesis that the mosquito has a faster cycle when the temperature stays between 23 and 28 degrees Celsius, we tried to check whether rainfall also helps its proliferation. We then tried to identify the relationship between these two variables and the number of Zika cases. Our second choice was a multiple regression model to meet this goal. This analysis was performed in Python.
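A minimal sketch of such a multiple regression, using invented weekly aggregates rather than our data:

```python
import numpy as np

# Invented weekly aggregates for one neighborhood: number of days inside
# the 23-28 Celsius threshold, rainfall (mm), and Zika cases.
threshold_days = np.array([7.0, 6.0, 5.0, 3.0, 2.0, 1.0])
rainfall = np.array([40.0, 35.0, 20.0, 15.0, 5.0, 2.0])
cases = np.array([120.0, 100.0, 70.0, 45.0, 20.0, 8.0])

# Design matrix with an intercept: cases ~ b0 + b1*threshold + b2*rainfall
X = np.column_stack([np.ones_like(cases), threshold_days, rainfall])
coef, *_ = np.linalg.lstsq(X, cases, rcond=None)

predicted = X @ coef  # fitted values to compare against the actual curve
```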

The chart below shows that the months in which temperatures most often fell within the threshold are those between December 2015 and April 2016. We could also notice the trend occurring again in December 2016 and early January 2017.



Green and red lines: temperature threshold

Pink and grey lines: min and max temperature

The next sequence of plots shows a similar positive trend among the curves for the number of cases per week, the temperature, and the rainfall. The analysis was performed on the neighborhoods of Campo Grande (1), Santa Cruz (2), and Guaratiba (3), which were severely affected during the 2016 outbreak.

Blue line: cases per week

Red line: temperature under the threshold

Yellow line: rainfall







We decided to use the multiple regression model to build a predictive application. We tested the model through a series of plots comparing the actual data with predicted values on a test dataset.

Testing values Vs. Predicted values for Rio de Janeiro


Green dots: testing values

Gray dots: predicted values

Real Cases Vs. Predicted Cases for Rio de Janeiro


Green lines: Real Cases

Gray lines: Predicted Cases

Real Cases Vs. Predicted Cases Comparison for Rio de Janeiro – December 2015 & 2016


Green: Real Cases

Gray: Predicted Cases

In this particular case (plot above) we did not have the actual number of Zika cases for December 2016 available, so we only predicted the number of cases.

Analysis per neighborhood: Campo Grande.

Statistical Readings from Jupyter Notebook console:


Green line: real cases

Gray line: predicted cases


Analysis per neighborhood: Santa Cruz.

Statistical Readings


Green line: real cases

Gray line: predicted cases


Analysis per neighborhood: Guaratiba.

Statistical Readings


Green lines: testing values

Gray lines: predicted values


We created a prototype to apply the model. It is a website with information about the number of Zika cases per month and charts showing the actual and predicted cases per neighborhood.

For those who would like to check it out, it’s available here.

Exploratory data analysis and baseball (aka Moneyball)

January 3, 2017


Executive Summary

In any professional sport, how well a team spends its money can mean the difference between a championship and a flop. It’s no different in baseball, the sport that introduced the concepts of professionalism and moneyball.

For those not used to the term, moneyball describes baseball operations in which a team endeavors to analyze the market for baseball players, buying those who are undervalued and selling those who are overvalued. Contrary to a common misconception, it is not about on-base percentage (a measure of how often a batter reaches base for any reason other than a fielding error, fielder’s choice, dropped/uncaught third strike, fielder’s obstruction, or catcher’s interference), but about exploring methods of rating players.

The term most commonly refers to the strategy used by the front office of the 2002 Oakland Athletics, who, with approximately US$44 million in salary, were competitive with larger-market teams such as the New York Yankees, who spent over US$125 million in payroll that same season. It derives its name from Michael Lewis’s 2003 book about the team’s analytical, evidence-based, sabermetric approach. There is also a 2011 motion picture of the same name, based on the book, starring Brad Pitt and Jonah Hill, through which the term became mainstream.

The data

I will be using data from two very useful databases on baseball teams, players, and seasons. One is curated by Sean Lahman. The other is from the nutshell package, which contains the data sets used as examples in the book “R in a Nutshell” by Joseph Adler.

The reason for picking two different datasets instead of one is that I wanted to perform the analysis on different sources. The decision also proved right on account of speed and practicality. The Lahman data set includes data on pitching, hitting, and fielding performance, among other tables, from 1871 through 2015. As we can see, it is thorough and up to date. The nutshell data set, on the other hand, is better designed for learning purposes (at least in my opinion) and comprises statistical data from 2000–2008 for every Major League Baseball team.

For those who are not familiar with baseball, a few points of explanation are important:

  • Major League Baseball is a professional baseball league, where teams pay players to play baseball (I know it sounds silly and redundant, but I have to be sure everybody knows what we are talking about here).
  • The goal of each team is to win as many games as possible out of a 162-game season. This earns a ticket to the postseason and a chance to play in the World Series, where the champion is decided.
  • Teams win games by scoring more runs than their adversary. A run is scored when a player advances around first, second, and third base and returns safely to home plate (in other words, completes a circuit around the infield).
  • In principle, better players are expensive, so teams that want good players need to spend more money.
  • Teams that spend the most frequently win the most (not always, but often enough that it is fair to consider it a case of cause and effect).


I provide the analysis of both data sets in a Markdown page that can be accessed @marcelo_tibau/exploratory-and-baseball

An application

One of the reasons I chose the nutshell data set is that it is used as a case study in the book “R in a Nutshell” by Joseph Adler. Inspired by this case, I developed a simple app that predicts the number of runs scored by a team based on a linear model. For those curious to see it, a demo of the app can be found @baseball-prediction
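The app itself is written in R; purely as a sketch of the kind of linear model involved, here is a Python version with invented team-season numbers and OBP/SLG as illustrative predictors (the actual app’s predictors may differ):

```python
import numpy as np

# Invented team-season rows in the spirit of the nutshell data set:
# on-base percentage, slugging percentage, and runs scored.
obp = np.array([0.320, 0.331, 0.340, 0.351, 0.360])
slg = np.array([0.400, 0.420, 0.435, 0.450, 0.470])
runs = np.array([680.0, 720.0, 760.0, 800.0, 845.0])

# Linear model: runs ~ b0 + b1*OBP + b2*SLG
X = np.column_stack([np.ones_like(runs), obp, slg])
coef, *_ = np.linalg.lstsq(X, runs, rcond=None)

def predict_runs(obp_val, slg_val):
    """Predict a team's runs from its OBP and SLG."""
    return float(coef[0] + coef[1] * obp_val + coef[2] * slg_val)
```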

Alerta Zika! Data Expedition

December 8, 2016

On behalf of my teammates Benjamin Alves and Cristiano Franco, as well as myself, I would like to thank the Inter-American Development Bank for the 3rd place awarded to our team at the “Alerta Zika” data expedition. More than the prize itself, our greatest pride was being able to provide three insights to the municipal health secretariat and contribute to the efforts to fight Zika in Rio de Janeiro.


Educational data mining and learning analytics

November 21, 2016


There’s a song by Leonard Cohen that states “everybody knows” and “that’s how it goes”. The same goes for the fact that the amount of data generated by online activities is skyrocketing. This is true because more and more of our commerce, entertainment, and communication occur over the Internet, and despite concerns about globalization and information accuracy, it’s a trend that is impossible to curb. Like a steamroller, this data tsunami touches us all, so it’s only natural that it also reaches education. With analytics and data mining experiments in education starting to proliferate, sorting out fact from fiction and identifying research possibilities and practical applications becomes a necessity.

Educational data mining and learning analytics work on the assumption of patterns and prediction. Both disciplines are used to research and build models in several areas that influence online learning systems. The bottom line is that if we can discern the pattern in the data and make sense of what is going on, we can predict what should come next and take the appropriate action. The business world calls it insight, and it’s the difference between making “big bucks” and being caught unprepared. So believe me, it’s valuable.

Data mining for educational purposes can basically be used in two big areas. One is user modeling, which encompasses what a learner knows, what a learner’s behavior and motivation are, what the user experience is like, and how satisfied users are with online learning. The other is user profiling: the same kind of data used to model can be used to profile users. Profiling means grouping similar users into categories using salient characteristics. These categories can then be used to offer experiences to groups of users, or to make individual recommendations and adapt how an online learning system performs.

A little explanation is needed at this point: online learning systems refer to online courses or to learning software or interactive learning environments that use intelligent tutoring systems, virtual labs, or simulations. They may be offered through a learning or course management system or through a learning platform. When online learning systems use data to change in response to student performance, they become adaptive learning environments.

The increasing use of online learning offers some opportunities, such as integrating assessment and learning and gathering information in nearly real time to improve future instruction. The process goes like this: as students work, the system captures their inputs, collecting evidence of activities, knowledge, and strategies used. Everything counts here: the information each student selects or inputs, the number of attempts the student makes, the allocation of time across parts of the process, and the number of hints and pieces of feedback given.

Just as students can benefit from detailed learning data, so can the broader education community thrive on an interconnected feedback system – for example, learning what works better for a particular piece of content and how to stimulate necessary skills like metacognition. As put by the U.S. Department of Education in a 2010 report (National Education Technology Plan – NETP, 2010a, p. 35): “The goal of creating an interconnected feedback system would be to ensure that key decisions about learning are informed by data and that data are aggregated and made accessible at all levels of the education system for continuous improvement”.

As these learning systems are expected to exploit learners’ activity data in detail to recommend what the next activity should be, and also to predict how a particular student will perform in future learning activities, being able to connect the dots and produce insights becomes a necessity. It is precisely here that data mining and learning analytics come in.

Understanding big data

Although using data to enhance decision processes is not new – it is the basis of what is known as business intelligence or analytics – it is a relatively new approach in education. Like their business counterparts, learning analyses can discern historical patterns and trends from data, create models that predict future trends and patterns, and comprise applied techniques from computer science, mathematics, and statistics for extracting usable information from very large datasets.

Usually, data are stored in a structured format, which is easy for computers to manipulate. However, the data gathered from learning platforms have a semantic structure that is difficult to discern computationally without human aid, and are hence called unstructured data (e.g., texts or images). Analyzing these events requires techniques that work with unstructured text and image data and with data from multiple sources. When these data reach a vast amount, we have the famous big data. It is important to understand that big data does not have a fixed size; it is a concept. Since any given number assigned to define it would change as computing technology advances to handle more data, big data is defined relative to current capabilities.

Big data, educational data mining and learning analytics

The vast amount of data captured from online behavior feeds algorithms and enables them to infer users’ knowledge, intentions, and interests and to build models that can predict future behavior and interest. To achieve this goal, data mining and analytics are applied in the fields of educational data mining and learning analytics. Although there is no hard distinction between the two, they have had different research histories and distinct research areas.

In general, educational data mining (also known as EDM) looks for new patterns in data and develops new algorithms and models, using statistics, artificial intelligence, and (of course) data mining to analyze the data collected during teaching and learning. Learning analytics, in turn, applies known predictive models in instructional systems, drawing on different fields of knowledge, such as information science, sociology, and psychology, as well as statistics, AI, and data mining, in order to influence educational practice.

Educational data mining

Diving a little deeper into the subject, the need to understand how students learn is the major force behind educational data mining. Its suite of computational and psychological methods and research approaches, supported by interactive learning methods and tools such as intelligent tutoring systems, simulations, and games, has opened up opportunities to collect and analyze student data and to discover patterns and trends in those data. Data mining algorithms help find variables that can be explored for modeling, and by applying data mining methods that classify data and find relationships, these models can be used to change what students experience next or even to recommend outside academic assignments to support their learning.

An important feature of educational data is that it is hierarchical: all the data (from answers, sessions, teachers, classrooms, etc.) are nested inside one another. Grouping them by time, sequence, and context provides levels of information that can show the impact of practice-session length or of the time spent learning, as well as how concepts build on one another and how practice and tutoring should be ordered. Providing the right context for this information helps to explain results and to know where a proposed instructional strategy does or does not work. The methods that have been important in stimulating developments in mining educational data are those related:

1) To prediction, for understanding which behaviors in an online learning environment, such as participation in discussion forums and taking practice tests, can be used to predict outcomes such as which students might fail a class. It helps to develop models that provide insights into how to better connect procedures or facts with the specific sequence and amount of practice items that best stimulate learning. It also helps to forecast or understand student educational outcomes, such as success on post-tests after tutoring.

2) To clustering, meaning to find data points that naturally group together and that can be used to split a full dataset into categories. Examples of clustering are grouping students based on their learning difficulties and interaction patterns, or grouping by similarity of recommending actions and resources.

3) To relationship mining, meaning discovering relationships between variables in a dataset and encoding them as rules for later use. These techniques can be used to associate student activity (in a learning management system or discussion forums) with student grades, to associate content with user types to build recommendations for content that is likely to be interesting, or even to make changes to teaching approaches. This latter area, called teaching analytics, is of growing importance and key to discovering which pedagogical strategies lead to more effective or robust learning.

4) To distillation, a technique that involves depicting data in a way that enables humans to quickly identify or classify features of the data. This area of educational data mining improves machine learning models by allowing humans to identify patterns or features more easily, such as student learning actions, student behaviors, or collaboration among students.

5) To discovery with models, a technique that involves using a validated model (developed through methods such as prediction or clustering) as a component in further analysis. Discovery with models supports the discovery of relationships between student behaviors and student characteristics or contextual variables, the analysis of research questions across a wide variety of contexts, and the integration of psychometric modeling into machine-learned models.
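The methods above are usually applied with full toolkits, but the core ideas are small. As a toy illustration of clustering (method 2), a few lines of plain Python can separate students by two invented interaction features:

```python
# Toy sketch of clustering: grouping students by two invented
# interaction features - forum posts and practice-test attempts.
students = {
    "s1": (2, 1), "s2": (3, 2), "s3": (1, 1),        # low engagement
    "s4": (20, 15), "s5": (22, 14), "s6": (19, 16),  # high engagement
}

def dist2(a, b):
    """Squared Euclidean distance between two feature pairs."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# Tiny k-means: assign each student to the nearest centroid, then
# move each centroid to the mean of its group, and repeat.
centroids = [(2, 1), (20, 15)]
for _ in range(5):
    groups = {0: [], 1: []}
    for sid, feats in students.items():
        nearest = min((0, 1), key=lambda k: dist2(feats, centroids[k]))
        groups[nearest].append(sid)
    centroids = [
        tuple(sum(students[s][i] for s in groups[k]) / len(groups[k])
              for i in (0, 1))
        for k in (0, 1)
    ]
```

In practice the same idea runs over many features and many students, and the resulting groups feed interventions or recommendations.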

Learning Analytics

Learning analytics emphasizes measurement and data collection as activities necessary to undertake, understand, analyze, and report data for educational purposes. Unlike educational data mining, learning analytics generally does not emphasize reducing learning into components, but instead seeks to understand entire systems and to support human decision making. It draws on a broad array of academic disciplines, incorporating concepts from information science, computer science, sociology, statistics, psychology, and the learning sciences.

The goal is to answer important questions that affect the way students learn and help us to understand the best way to improve organizational learning systems. Therefore, it emphasizes models that could answer questions such as:

  • When are students ready to move on to the next topic?
  • When is a student at risk for not completing a course?
  • What is the best next course for a given student?
  • What kind of help should be provided?

Because a visual representation of analytics is critical to generating actionable analyses, the information is often presented as “dashboards” that show data in an easily digestible form. Although the methods used in learning analytics are drawn from those used in educational data mining, it may additionally employ social network analysis (to determine student-to-student and student-to-teacher relationships and interactions, which helps identify disconnected students, influencers, etc.) and social metadata to determine what a user is engaged with.

As content moves online and mobile devices enable 24/7 access to it, understanding what the data reveal can lead to fundamental shifts in teaching and learning systems as a whole. Learners and educators at all levels can benefit from understanding the possibilities of using big data in education. Data mining and learning analytics are two powerful tools that can help shape the future of human learning.



[25] Hamilton, L., R. Halverson, S. Jackson, E. Mandinach, J. Supovitz, and J. Wayman. 2009. Using Student Achievement Data to Support Instructional Decision Making (NCEE 2009-4067). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance.

[26] Jeong, H., and G. Biswas. 2008. “Mining Student Behavior Models in Learning-by-Teaching Environments.” In Proceedings of the 1st International Conference on Educational Data Mining, Montréal, Québec, Canada,127–136.

[27] Johnson, L., A. Levine, R. Smith, and S. Stone. 2010. The 2010 Horizon Report. Austin, TX: The New Media Consortium.

[28] Johnson, L., R. Smith, H. Willis, A. Levine, and K. Haywood. 2011. The 2011 Horizon Report. Austin, TX: The New Media Consortium.

[29] Kardan, S., and C. Conati. 2011. A Framework for Capturing Distinguishing User Interaction Behaviours in Novel Interfaces. In Proceedings of the 4th International Conference on Educational Data Mining, edited by M. Pechenizkiy, T. Calders, C. Conati, S. Ventura, C. Romero, and J. Stamper159–168.

[30] Köck, M., and A. Paramythis. 2011. “Activity Sequence Modeling and Dynamic Clustering for Personalized E-Learning. Journal of User Modeling and User-Adapted Interaction 21 (1-2): 51–97.

[31] Koedinger, K. R., R. Baker, K. Cunningham, A. Skogsholm, B. Leber, and J. Stamper. 2010. “A Data Repository for the EDM Community: The PSLC DataShop.” In Handbook of Educational Data Mining, edited by C. Romero, S. Ventura, M. Pechenizkiy, and R.S.J.d. Baker. Boca Raton, FL: CRC Press, 43–55.

[32] Koedinger, K., E. McLaughlin, and N. Heffernan. 2010. “A Quasi-experimental Evaluation of an On-line Formative Assessment and Tutoring System.” Journal of Educational Computing Research 4: 489–510.

[33] Lauría, E. J. M., and J. Baron. 2011. Mining Sakai to Measure Student Performance: Opportunities and Challenges in Academic Analytics

[34] Long, P. and Siemens, G. 2011. “Penetrating the Fog: Analytics in Learning and Education.” EDUCAUSE Review 46 (5).

[35] Lovett, M., O. Meyer, and C. Thille. 2008. The Open Learning Initiative: Measuring the Effectiveness of the OLI Statistics Course in Accelerating Student Learning.” Journal of Interactive Media in Education Special Issue: Researching Open Content in Education. 14.

[36] Macfayden, L. P., and S. Dawson. 2010. “Mining LMS Data to Develop an ‘Early Warning’ System for Educators: A Proof of Concept.” Computers & Education 54 (2): 588–599.

[37] Manyika, J., M. Chui, B. Brown, J. Bughin, R. Dobbs, C. Roxburgh, and A. H. Byers. 2011. Big Data: The Next Frontier for Innovation, Competition, and Productivity. McKinsey Global Institute.

[38] Martin, B., A. Mitrovic, K. Koedinger, and S. Mathan. 2011. “Evaluating and Improving Adaptive Educational Systems with Learning Curves.” User Modeling and User-Adapted Interaction 21 (3): 249–283.

[39] Means, B., C. Chelemer, and M. S. Knapp (eds.). 1991. Teaching Advanced Skills to at-Risk Students: Views from Research and Practice. San Francisco, CA: Jossey-Bass.

[40] Merceron, A., and K. Yacef. 2010. “Measuring Correlation of Strong Symmetric Association Rules in Educational Data.” In Handbook of Educational Data Mining, edited by C. Romero, S. Ventura, M. Pechenizkiy, and R. S. J. d. Baker. Boca Raton, FL: CRC Press, 245–256.

[41] New Media Consortium. 2012. NMC Horizon Project Higher Ed Short List. Austin, TX: New Media Consortium.

[42] O’Neil, H. F. 2005. What Works in Distance Learning: Guidelines. Greenwich CT: Information Age Publishing.

[43] Reese, D. D., R. J. Seward, B. G. Tabachnick, B. Hitt, A. Harrison, and L. McFarland. In press. “Timed Report Measures Learning: Game-Based Embedded Assessment.” In Assessment in Game-Based Learning: Foundations, Innovations, and Perspectives, edited by D. Ifenthaler, D. Eseryel, and X. Ge. New York, NY: Springer.

[44] Ritter, S., J. Anderson, K. Koedinger, and A. Corbett. 2007. “Cognitive Tutor: Applied Research in Mathematics Education.” Psychonomic Bulletin & Review 14 (2): 249–255.

[45] Romero C. R., and S. Ventura. 2010. “Educational Data Mining: A Review of the State of the Art.” IEEE Transactions on Systems, Man and CyberneticsPart C: Applications and Reviews 40 (6): 601–618.

[46] Siemens, G., and R. S. J. d. Baker. 2012. “Learning Analytics and Educational Data Mining: Towards Communication and Collaboration.” In Proceedings of LAK12: 2nd International Conference on Learning Analytics & Knowledge, New York, NY: Association for Computing Machinery, 252–254.

[47] U.S. Department of Education. 2010a. National Education Technology Plan

———. 2010b. Use of Education Data at the Local Level: From Accountability to Instructional Improvement. Washington, DC: U.S. Department of Education.

———. 2010c. Basic Concepts and Definitions for Privacy and Confidentiality in Student Education Records. SLDS Technical Brief 1. NCES 2011-601. Washington, DC: U.S. Department of Education.

———. 2012a. December 2011- Revised FERPA Regulations: An Overview for SEAS and LEAS. (PDF file). Washington, DC: U.S. Department of Education.

———. 2012b. The Family Educational Rights and Privacy ActGuidance for Reasonable Methods and Written Agreements (PDF file). Washington, DC: U.S. Department of Education.

[48] U.S. Department of Education, Office of Educational Technology, Enhancing Teaching and Learning Through Educational Data Mining and Learning Analytics: An Issue Brief, Washington, D.C., 2012.

[49] VanLehn, K., C. Lynch, K. Schulze, J. A. Shapiro, R. H. Shelby, L. Taylor, D. Treacy, A. Weinstein, and M. Wintersgill. 2005. “The Andes Physics Tutoring System: Lessons Learned.” International Journal of Artificial Intelligence in Education 15 (3): 147–204.

[50] Viégas, F. B., M. Wattenberg, M. McKeon, F. Van Ham, and J. Kriss. 2008. “Harry Potter and the Meat-Filled Freezer: A Case Study of Spontaneous Usage of Visualization Tools.” In Proceedings of the 41st Annual Hawaii International Conference on System Sciences, 159.

[51] Wayman, J. C. 2005. “Involving Teachers in Data-Driven Decision Making: Using Computer Data Systems to Support Teacher Inquiry and Reflection.” Journal of Education for Students Placed At Risk 10 (3): 295–308.


Using regression models to explore relationships: the “Gasoline Mileage” example

November 17, 2016 § Leave a comment


If you are into statistics, you probably already know the importance of regression analysis to statistical modelling. If you are not, suffice it to say that it is important stuff and is used for estimating the relationships among variables. There are many techniques and extensions for carrying out regression analysis, such as linear regression, multivariate linear regression (also known as the general linear model), and variants such as Bayesian multivariate linear regression, least squares, and so on.

What these approaches have in common is an equation of the form y = a + bx, where x is the explanatory variable and y is the dependent variable. The slope of the line is b, and a is the intercept (the value of y when x = 0).
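The line fit above has a simple closed-form solution: b = cov(x, y) / var(x) and a = mean(y) − b·mean(x). The posts on this blog use R (where `lm(y ~ x)` does this), but as a minimal language-agnostic sketch, here is the same computation in plain Python on made-up data points:

```python
# Minimal ordinary least squares fit of y = a + b*x.
# Illustrative sketch only: the data below are made up,
# and the blog's actual analyses were done in R with lm().

def fit_line(x, y):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    # Slope: covariance of x and y divided by variance of x.
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    # Intercept: the value of y when x = 0.
    a = my - b * mx
    return a, b

# Perfectly linear toy data following y = 1 + 2x:
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # → 1.0 2.0
```

On noisy real data the fitted a and b will of course only approximate the underlying relationship, which is why regression output is usually read together with confidence intervals.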

Harold V. Henderson and Paul F. Velleman provided a famous example of the use of a regression model in their paper “Building Multiple Regression Models Interactively”, published in 1981 in the journal Biometrics (those who are interested in reading the original one can check

There they used what is known as the “Gasoline Mileage Data”, which became a dataset used around the world for educational purposes. The data were extracted from the 1974 Motor Trend magazine and comprise gasoline mileage in miles per gallon (MPG) and ten aspects of automobile design and performance for 32 automobiles (1973–74 models). I explored these data using the built-in R dataset “mtcars”. As I believe that any analysis has to have a purpose, mine attempted to determine whether an automatic or a manual transmission is better for MPG and to quantify the MPG difference.
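The transmission question boils down to comparing the mean MPG of the two groups (in R, something like `t.test(mpg ~ am, data = mtcars)`). As a hedged sketch of the idea, here is the group-mean comparison in plain Python on a few made-up (mpg, am) pairs — the values below are illustrative, not the actual mtcars rows:

```python
# Hypothetical sketch: is manual (am = 1) or automatic (am = 0)
# transmission associated with higher MPG? The rows below are
# invented for illustration; the real analysis used R's mtcars.

data = [(21.4, 0), (18.1, 0), (14.3, 0),   # automatic
        (22.8, 1), (30.4, 1), (33.9, 1)]   # manual

def group_mean(rows, am):
    vals = [mpg for mpg, a in rows if a == am]
    return sum(vals) / len(vals)

auto = group_mean(data, 0)
manual = group_mean(data, 1)
print(round(manual - auto, 2))  # → 11.1 (manual MPG advantage in this toy sample)
```

A raw mean difference like this is only the starting point: the paper goes on to multiple regression precisely because other variables (weight, horsepower) confound the simple comparison.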

In doing so, I composed the following paper with linear and multiple regression models and the code to perform the modelling in R, as well as my personal analysis. The paper can be accessed at

Prediction Assignment – Practical Machine Learning

November 11, 2016 § Leave a comment

To those who are eager to know more about Machine Learning and how it plays out in real-life work, I share a paper I wrote with the analysis, code and algorithms of a Machine Learning prediction assignment. I wrote the code in R, which is a statistical programming language. I would also like to thank PUC-Rio for providing the dataset that I worked with.

Executive Summary

Using devices such as Jawbone Up, Nike FuelBand, and Fitbit it is now possible to collect a large amount of data about personal activity relatively inexpensively. These types of devices are part of the quantified self movement – a group of enthusiasts who take measurements about themselves regularly to improve their health, to find patterns in their behavior, or because they are tech geeks. One thing that people regularly do is quantify how much of a particular activity they do, but they rarely quantify how well they do it. In this project, the goal was to use data from accelerometers on the belt, forearm, arm, and dumbbell of 6 participants, who were asked to perform barbell lifts correctly and incorrectly in 5 different ways.
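Predicting which of the 5 ways a lift was performed is a multi-class classification problem over sensor feature vectors. As a minimal, hedged sketch of the idea (the actual paper uses R and the real accelerometer data), here is a 1-nearest-neighbour classifier in plain Python on invented (belt, arm, forearm) features:

```python
# Toy 1-nearest-neighbour classifier: assign a new reading the label
# of its closest training example. Feature vectors and labels below
# are made up; the real assignment used far richer sensor features.

def predict_1nn(train, x):
    # train: list of (feature_vector, label) pairs; x: feature vector.
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(train, key=lambda pair: dist2(pair[0], x))[1]

train = [((1.0, 0.2, 0.1), "A"),   # correct execution
         ((0.1, 1.1, 0.9), "B"),   # one class of mistake
         ((0.9, 0.3, 0.2), "A")]

print(predict_1nn(train, (0.95, 0.25, 0.15)))  # → A
```

In practice the assignment favors ensemble methods such as random forests, which handle the many noisy, correlated sensor features far better than a raw nearest-neighbour rule.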

Data source

The data for this project came from the Human Activity Recognition study, conducted by Pontifícia Universidade Católica – Rio de Janeiro.

Ugulino, W.; Cardador, D.; Vega, K.; Velloso, E.; Milidiu, R.; Fuks, H. “Wearable Computing: Accelerometers’ Data Classification of Body Postures and Movements.” In: Proceedings of the 21st Brazilian Symposium on Artificial Intelligence (SBIA 2012), Lecture Notes in Computer Science, pp. 52–61. Curitiba, PR: Springer Berlin/Heidelberg, 2012. ISBN 978-3-642-34458-9. DOI: 10.1007/978-3-642-34459-6_6.

The paper

It can be accessed at:

The future of the tech industry

November 10, 2016 § Leave a comment


For years, the technology industry defended the argument that, to have an innovation-based economy, it was necessary to stimulate education, knowledge applied to intellectual property, and multiculturalism. This directly shaped public policy and legislation in several countries regarding the organization and methodologies of their educational systems, the protection of intellectual property, incentives for scientific research (which generates intellectual property), and immigration policy (especially rules for granting work visas to so-called skilled workers).

The USA, as one of the great driving forces of the tech industry, has always been seen as decisive in setting this market's posture worldwide. It is natural, then, that a Trump presidency – amplified by Brexit (we cannot forget that the United Kingdom is the world's second-largest producer of intellectual property, behind only the USA) – would lead the industry to a strategic reassessment of its policies. E-mails have already begun circulating around Silicon Valley proposing a repositioning toward advocating tax cuts for the sector and a commitment to repatriating foreign earnings.

I understand the stance and recognize the need for repositioning – especially considering that Trump declared during the campaign that he would start an antitrust action against Amazon and promised to force Apple to manufacture its products in the USA. But one of the most powerful arguments technology companies made about their own importance was always that their goals were not merely financial: they encompassed building a progressive future. Yes, they wanted money, but they also wanted to build a better world in philosophical and democratic terms – they defended education and knowledge as a way of empowering people and encouraging them to want to become smarter and more cultured. The logic was that smarter people had more possibilities to innovate.

Thomas Friedman – the author of “The World Is Flat”, which provided much of the conceptual basis for the arguments defended by the tech industry – wrote about the result of the American elections, in a piece entitled “Homeless in America”, that so-called lifelong learning could be an inexhaustible source of stress for some people.

The risk this worldview poses is the following: if learning can do more harm than good, and if some people not only reject it but consciously act to prevent the formation of an environment that stimulates the development of its source (knowledge), why prioritize education?

40% of the scientific research carried out in the United Kingdom was funded by the European Union (rejected by the majority of Britons). Facebook and Twitter have been blamed for the decline of journalism and the irrelevance of facts (and, on top of that, for contributing to the spread of the trolling, racism and misogyny that characterized the campaign of now-President Trump). The growth of anti-tech sentiment may genuinely change the direction that educational policies had been taking in developed countries (which, like it or not, set the tone for the rest of the world).

The side effect may be the creation of a technological intellectual elite – because the industry will carry on and will need people with the ability to create intellectual property. But perhaps the dream of democratizing that ability is over.

Artificial Intelligence and Education

October 27, 2016 § Leave a comment


As a retrospective, I suggest reading the previous texts on the subject: the first, about the “One Hundred Year Study on Artificial Intelligence”; the second, on the definition of what AI is; the third, covering trends; and the fourth, on the impact on the job market. This is the last text in the series and addresses one of the most strategic topics, at least in my view, in the planning priorities of people, organizations and countries: education.

I dedicated a good part of my career to corporate education – I am one of those “training” guys that most people who work in medium and large companies have probably come across. I know many turn up their noses at the use of the term “education” alongside the word “corporate”, but the truth is that, given the well-known (enormous) gap in the quality of educational attainment in our country, many companies decided to invest in their employees' education themselves, often going beyond the knowledge and skills specific to their business and helping employees with basic education – thereby, in my opinion, contributing to the education of Brazilians. To work in this environment, I spent years “consuming” everything I could find about educational methods. When I started getting involved with machine learning, I was pleasantly surprised to realize that many of the concepts I had learned about how people learn could also be applied to how machines learn. I offer this introduction only to contextualize my relationship with the topic.

Since the Lego Mindstorms projects, developed by the MIT Media Lab from the 1980s onward, robots have become popular educational tools. Due credit here goes to the mathematician Seymour Papert, a guru to many (myself included), for his work involving the use of technology in children's learning since the 1960s. Papert was fundamental to the development of the concept of multimedia learning, today an integral part of learning theories, as well as to the evolution of the field of Artificial Intelligence.

This melting pot of ideas stimulated the development of different fronts of AI applied to education. It is important to make clear from the outset that none of these fronts dismisses the importance of human participation as a vector of teaching. As I mentioned in the previous text, on the impact on the job market, AI may broaden the scope of what counts as routine work, but the teacher's role is definitely not among those tasks. Tools such as Intelligent Tutoring Systems (ITS), fields such as Natural Language Processing, and applications such as Learning Analytics aim to help teachers in the classroom and at home, and to significantly expand students' knowledge. With the introduction of virtual reality into the educational repertoire, for example, the impact of Artificial Intelligence on human learning may be such that it “risks” changing the way our brains work (of course, this impact is still conjecture). I believe the best way to approach this subject is through examples, which I will associate with the main topics of AI applied to education. Whenever an example comes with a link, feel free to click: links lead to additional information on the subject or video tutorials about a given tool. If any rickrolling happens, let me know.

Tutor robots

Ozobot is a little robot that helps children understand the logic behind programming and reason deductively. It is “configured” by the children themselves, through color-coded patterns, to dance or walk. Cubelets, in turn, help children develop logical thinking through the assembly of robot blocks, each with a specific function (think, act or sense). Cubelets have been used to encourage STEM learning.

Dash is the robot offered by Wonder Workshop to introduce children (and adults) to robotics. The robot's actions can be programmed through Blockly, a visual programming language developed by Google, or even through apps built for iOS and Android using beefier languages such as C or Java.

Finally, PLEO rb is a robotic pet created to stimulate the learning of biology. The user can program the robot to react to different aspects of the environment.

Intelligent Tutoring Systems (ITS)

ITS began to be developed at the end of the 20th century by several research centers, to help students solve physics problems. Their strength has always been their ability to facilitate human–machine “dialogue”. Over the first decades of the 21st century, they have come to be used for language teaching. Carnegie Speech and Duolingo are examples of this application, using Automatic Speech Recognition (ASR) and neurolinguistic techniques to help students recognize language or pronunciation errors and correct them.

They have also been used to assist in learning mathematics; the Carnegie Cognitive Tutor has been adopted by North American schools for this purpose. Similar Cognitive Tutors are used for learning chemistry, programming, medical diagnostics, genetics and geography, among others. Cognitive Tutors are ITS whose software imitates the role of a human teacher, offering hints when a student gets stuck on some topic, such as a math problem. Based on the hint requested and the answer provided by the student, the cybernetic “tutor” offers specific feedback according to the context of the question.

Another ITS, called SHERLOCK, has been helping the US Air Force diagnose problems in the electrical systems of its aircraft since the late 1980s. For those who want to know more about it, I suggest this paper published in the early days of the internet (don't be startled by the design).

But the big “stars” in the ITS constellation are definitely the MOOCs (Massive Open Online Courses). By allowing the inclusion of content via Wikipedia and Khan Academy and of sophisticated Learning Management Systems (LMS), based on both synchronous models (with deadlines for completing each phase of the course) and asynchronous models (where learners go at their own pace), MOOCs have become the most popular adaptive learning tool.

EdX, Coursera and Udacity are examples of MOOCs that “feed” on techniques from machine learning, neurolinguistics and crowdsourcing to grade assignments, assign marks and develop learning tasks. It is true that professional and higher education are the biggest beneficiaries of this type of ITS (compared with primary and secondary education). The reason is that their audience, being generally composed of adults, has less need for face-to-face interaction. The hope is that, with greater stimulus to the development of metacognition skills, the benefits offered by these platforms can be distributed more democratically.

Learning Analytics

The impact of Big Data is also being felt in education. All the tools presented generate some kind of log or data record. Just as happened in the corporate world with BI (Business Intelligence) and BA (Business Analytics), the massive generation of data arising from the integration of AI, education and the internet created the need to understand and contextualize it, in order to better seize the opportunities and insights it enables. As a result, the field called Learning Analytics has been growing at supersonic speed.

The truth is that online courses are not only good for delivering knowledge at scale; they are natural vehicles for storing and instrumenting data. Their potential contribution to scientific and academic development is therefore exponential. The emergence of organizations such as the Society for Learning Analytics Research (SOLAR) and of conferences such as the Learning Analytics and Knowledge Conference, organized by SOLAR itself, and the Learning at Scale Conference (L@S), whose 2017 edition will be organized by MIT, reflects the importance this subject is being given elsewhere. AI has contributed to the analysis of learner engagement, behavior and educational development with state-of-the-art techniques such as deep learning and natural language processing, in addition to the predictive analysis techniques commonly used in machine learning.

More recent projects in the field of Learning Analytics have focused on creating models that more precisely capture learners' most common doubts and misconceptions, on predicting the risk of dropping out, and on providing real-time feedback integrated with learning outcomes. To this end, Artificial Intelligence scientists and researchers have been working to understand the cognitive processes involved in comprehension, writing, knowledge acquisition and memory, and to apply this understanding to educational practice by developing technologies that facilitate learning.

The unwary might ask: with AI technologies becoming ever more sophisticated and with growing effort going into education-specific solutions, why aren't more and more schools, colleges and universities using them?

The answer is not easy and involves several variables. The first is related to society's mindset and how much that society values knowledge. There are places where the application of AI in education is more advanced, such as South Korea and England, and others where a concentrated effort toward it is already underway, such as Switzerland and Finland. Not by chance, these are countries with substantial production of intellectual property. The second involves dominance in generating knowledge and applying it as intellectual property. On this variable, the USA remains unbeatable, being responsible for a good part of the knowledge produced by humankind. Again not by chance, it leads the development of the AI field. The third variable, as one would expect, is cost. It is not cheap, and since money is a scarce resource everywhere (more in some places than in others, of course), the society in question must define its priorities before making this investment. The fourth concerns access to the data produced by these educational initiatives and the conclusions drawn from them. Although there are strong indications that AI-driven technology really does have a positive impact on learning, there are still no objective conclusions on the subject – largely because of how recent it is. And since the investment is high, few are willing to be early adopters.

In any case, the question remains: is it worth it? On this, I like to quote my former boss, Edmour Saiani. Whenever asked whether we should train someone, he would answer: “remember that the problem is never training a person who then leaves the company; the problem is not training one who stays”. In cases like this, doing nothing is the worst option.