March 2, 2017
Alerta Zika! was a collaborative event to explore the potential of data and technology to improve responses to the Zika virus (more information here). The Inter-American Development Bank organized it with the support of several partners, including Rio de Janeiro City Hall and some major universities based in the city. From December 2nd to 3rd, 2016, about 10 registered teams explored epidemiological, environmental and social factors to understand and explain the progress of this disease. It was a day and a half of hard work contributing to the efforts to fight the Zika disease in Rio de Janeiro. We gained access to a dataset with all the cases of Zika, Dengue and Chikungunya registered in the city of Rio de Janeiro during 2015 and 2016. To get to know the data, our team started to ‘play’ with the dataset and check the variables. In doing so, we became curious about Zika’s evolution pattern and its role during the outbreak in the early months of 2016.
Our hypothesis was that the disease propagation pattern and its correlations with time, city areas and weather could be used as an indicator of where and when the disease spreads, helping city officials decide the best ways to allocate resources. We then set as our goal to create a map of Rio de Janeiro showing the historical evolution of the Zika disease over time and temperature. During several conversations with representatives from the municipal health secretariat, we wondered whether a social development indicator could provide insights about the spreading pattern. We decided to include the HDI (Human Development Index) – known in Brazil as IDH – as a social parameter.
We then defined as our target variables the coordinates (latitude and longitude), the dates on which the cases occurred, temperature over the seasons, and social development indicators for Rio regarding income, education level and longevity. Our preliminary tasks were the creation of a grid covering the Rio de Janeiro city map and of a data frame to aggregate the variables subset from different datasets. Our first goal in the exploratory analysis was to explore the shape of the distributions. The grid helped us check where the cases were located, using cells of about 400 meters – roughly the mosquito’s flight range – and to cluster the patients’ cases into broader areas. It also allowed us to check how far the disease spread throughout the city and to identify the areas where most of the cases took place.
Performing a time-series analysis, we were able to identify a correlation between temperature and the number of cases. At this point, understanding the mosquito life cycle is valuable. Aedes aegypti flourishes in a temperature range from 23 to 28 degrees Celsius (about 73 to 82 Fahrenheit). A few degrees below or above this threshold does not necessarily kill the mosquito, but it makes the environment less comfortable for its development, hence slowing its life cycle. From the egg to a mosquito able to inoculate the virus in an individual there is a period of 20 to 25 days, so the previous month’s mosquitoes are responsible for the current month’s patients. As can be seen in the plots below, comparing the disease cases throughout the city by month with the temperature variation per day of the previous month, the outbreak during March and April (plots 3 and 4) follows the favorable conditions observed throughout February and March, when the 23 to 28-Celsius range held during most of the days. The red circles correspond to the areas with the majority of cases.
This led us to our first meaningful insight: the temperature from the previous month seems to affect the number of cases in the current month.
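A minimal way to check this kind of lag-1 relationship is to correlate the number of days within the temperature band in one month with the case count of the following month. The sketch below uses made-up monthly numbers purely for illustration:

```python
# Made-up monthly series: days with temperatures inside the 23-28 Celsius band,
# and registered Zika cases in the same month
days_in_band = {"Jan": 12, "Feb": 25, "Mar": 27, "Apr": 18, "May": 6}
cases        = {"Jan": 40, "Feb": 150, "Mar": 900, "Apr": 1100, "May": 420}
months = ["Jan", "Feb", "Mar", "Apr", "May"]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Lag the predictor by one month: February's weather against March's cases, etc.
r_lagged = pearson([days_in_band[m] for m in months[:-1]],
                   [cases[m] for m in months[1:]])
# For comparison, the same-month correlation
r_same = pearson([days_in_band[m] for m in months],
                 [cases[m] for m in months])
```

With these toy numbers, the lagged correlation comes out much stronger than the same-month one, which is the shape of the effect described above.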
As we shifted our attention to the social indicator data at hand, we were able to identify a curious behavior. Some critical areas during the outbreak shared a similarly low IDH coefficient. The plot below provides visual support. The orange circle sizes relate to the level of IDH: the smaller the circle, the lower the IDH, and vice versa.
The highlighted areas on the plots above correspond to Maré (a neighborhood), the far-north zone and the far-west zone of the city. These areas share lower levels on the social indicators compared to other parts of the municipality.
Although income does not seem to be a social influence affecting the outbreak – as can be seen in the plot below, comparing the behavior of the south zone, the wealthiest part of the city, with the cases in March and April – there is a peculiarity to consider. In this particular area there is a huge economic disparity. The most exclusive addresses are within walking distance of some favelas (slums usually located on the hills around the area), where the IDH is similar to the levels seen in the previous plot.
From this observation, we drew a second meaningful insight: the social indicator (IDH) seems to be an influencing factor in the areas with the most cases during the outbreak.
As we went further in our analysis, another curious behavior caught our attention. Even as the temperature dropped away from the 23 to 28-Celsius range, some areas kept appearing among those with the most cases (as can be seen in the comparisons below, from May to July).
What these areas have in common is that woods and forests surround them all. This common factor provided the third meaningful insight we delivered: some recurrent disease focus areas seem to grow around or close to woods and forest areas.
Exploratory analysis is usually a good start to predictive modeling because it helps to understand the datasets a little further and to summarize their main characteristics. Exploring the data and formulating hypotheses that could lead to new data collection and experiments is a major component of extracting usable information from data, suggesting hypotheses, and supporting the selection of appropriate statistical tools and techniques. Our main goal at the data expedition was to take a first step toward understanding past behavior in order to prepare the ‘seeds’ for a future ‘crop’. Our third-place award was a source of pride for us and seems to indicate that this goal was accomplished.
After the Data Expedition
We continued exploring the data and aggregating other variables. Our goal was to get a predictive model that could build on the initial exploratory analysis. The new variables were population per neighborhood and rainfall. We also added more temperature data covering the final months of 2016 and early January 2017.
The first choice was a simple linear regression using the variable population per neighborhood to predict cases based on population. Below are some code chunks in R and statistical readings (we intend to show more information in a Markdown file – a type of file that can combine text, code and plots).
A quick view of the dataset:
“bairro” stands for neighborhood; “casos_zika” for Zika cases; and “populacao” for population.
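For illustration, the same fit can also be sketched outside R. The snippet below hand-codes ordinary least squares for a single predictor (the equivalent of R’s `lm(casos_zika ~ populacao)`); the neighborhood rows are hypothetical placeholders, not the real dataset:

```python
# Hypothetical rows mirroring the bairro / casos_zika / populacao columns
rows = [
    ("Campo Grande", 910, 328370),
    ("Santa Cruz",   450, 217333),
    ("Guaratiba",    160, 110049),
    ("Botafogo",      95,  82890),
]

population = [pop for _, _, pop in rows]
zika_cases = [cases for _, cases, _ in rows]

# Ordinary least squares for one predictor: slope = cov(x, y) / var(x)
n = len(population)
mean_pop = sum(population) / n
mean_cases = sum(zika_cases) / n
slope = (sum((x - mean_pop) * (y - mean_cases) for x, y in zip(population, zika_cases))
         / sum((x - mean_pop) ** 2 for x in population))
intercept = mean_cases - slope * mean_pop

def predict(pop):
    """Predicted Zika cases for a neighborhood with the given population."""
    return intercept + slope * pop
```

By construction the fitted line passes through the point of means, which is a quick sanity check on any simple-regression implementation.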
Some statistical readings from the RStudio console:
In plot 1 (Residuals vs. Fitted), the residuals are mostly spread equally around a horizontal line, but there are also outliers. In plot 2 (Normal Q-Q), the residuals seem to be normally distributed, at least to some extent.
In plot 3 (Scale-Location), complementing plot 1, the residuals are spread fairly equally along the range of predictors, showing some homoscedasticity. Plot 4 (Residuals vs. Leverage) identifies observations #120 and #23 as influential.
Based on the thesis that the mosquito has a faster cycle when the temperature stays between 23 and 28 degrees Celsius, we tried to check whether rainfall also helps the proliferation. We then tried to identify the relationship between these two variables and the number of Zika cases. Our second choice was to use a multiple regression model to meet this goal. This analysis was performed in Python.
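A minimal sketch of such a multiple regression – cases as a function of temperature-band days and rainfall – is shown below. The weekly numbers are invented for illustration, and the coefficients are estimated by least squares on a design matrix with an intercept column:

```python
import numpy as np

# Hypothetical weekly observations: days within the 23-28 Celsius band,
# rainfall in mm, and registered Zika cases
temp_days = np.array([1, 3, 5, 6, 7, 7, 6, 4, 2, 1], dtype=float)
rainfall  = np.array([5, 30, 80, 90, 120, 110, 70, 40, 20, 10], dtype=float)
cases     = np.array([8, 25, 70, 85, 115, 105, 75, 40, 18, 9], dtype=float)

# Design matrix with an intercept column: cases ~ b0 + b1*temp_days + b2*rainfall
X = np.column_stack([np.ones(len(cases)), temp_days, rainfall])
beta, *_ = np.linalg.lstsq(X, cases, rcond=None)

predicted = X @ beta
```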
The chart below shows that the months in which temperatures most often fell within the threshold are those between December 2015 and April 2016. We could also see that trend occurring again in December 2016 and early January 2017.
Green and red lines: temperature threshold
Pink and grey lines: min and max temperature
The next sequence of plots shows a similar positive trend between the curves for the number of cases per week, the temperature and the rainfall. The analysis was performed for the neighborhoods of Campo Grande (1), Santa Cruz (2) and Guaratiba (3), which were severely affected during the 2016 outbreak.
Blue line: cases per week
Red line: temperature under the threshold
Yellow line: rainfall
We decided to use the multiple regression model to build a predictive application. We tested the model through a series of plots comparing the actual data with predictions on a test dataset held out from the data used to fit the model.
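The evaluation step can be sketched as follows, with synthetic data standing in for the real series: fit on the training weeks, predict the held-out weeks, and measure the average miss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic weekly data: intercept column, temp-band days, rainfall (mm)
n_weeks = 30
X = np.column_stack([np.ones(n_weeks),
                     rng.uniform(0, 7, n_weeks),
                     rng.uniform(0, 150, n_weeks)])
true_beta = np.array([5.0, 8.0, 0.6])
y = X @ true_beta + rng.normal(0, 5, n_weeks)  # cases, with noise

# Hold out the last 10 weeks as a test set; fit only on the first 20
X_train, X_test = X[:20], X[20:]
y_train, y_test = y[:20], y[20:]

beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
y_pred = X_test @ beta

mae = float(np.abs(y_test - y_pred).mean())  # average miss, in cases
```

Plotting `y_test` against `y_pred`, as in the figures below, gives the visual version of this same comparison.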
Testing values Vs. Predicted values for Rio de Janeiro
Green dots: testing values
Gray dots: predicted values
Real Cases Vs. Predicted Cases for Rio de Janeiro
Green lines: Real Cases
Gray lines: Predicted Cases
Real Cases Vs. Predicted Cases Comparison for Rio de Janeiro – December 2015 & 2016
Green: Real Cases
Gray: Predicted Cases
In this particular case (plot above), the actual number of Zika cases in December 2016 was not available to us, so we only predicted the number of cases.
Analysis per neighborhood: Campo Grande.
Statistical readings from the Jupyter Notebook console:
Green line: real cases
Gray line: predicted cases
Analysis per neighborhood: Santa Cruz.
Green line: real cases
Gray line: predicted cases
Analysis per neighborhood: Guaratiba.
Green lines: testing values
Gray lines: predicted values
We created a prototype to apply the model. It’s a website with information about the number of Zika cases per month and graphics showing the actual cases and the predicted ones per neighborhood.
For those who would like to check it out, it’s available here.
January 3, 2017
In any professional sport, how well a team spends its money can mean the difference between a championship and a flop. It’s no different in baseball, the sport that introduced the concepts of professionalism and moneyball.
For those who are not used to the term, moneyball describes baseball operations in which a team endeavors to analyze the market for baseball players, buying those who are undervalued and selling those who are overvalued. Contrary to a common misconception, it is not about on-base percentage (a measure of how often a batter reaches base for any reason other than a fielding error, fielder’s choice, dropped/uncaught third strike, fielder’s obstruction, or catcher’s interference), but about exploring methods of rating players.
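For reference, the standard formula behind that definition can be written as a small function (the season line in the example is invented):

```python
def on_base_percentage(hits, walks, hit_by_pitch, at_bats, sacrifice_flies):
    """OBP = (H + BB + HBP) / (AB + BB + HBP + SF)."""
    reached = hits + walks + hit_by_pitch
    opportunities = at_bats + walks + hit_by_pitch + sacrifice_flies
    return reached / opportunities

# Hypothetical season line: 160 hits, 70 walks, 5 HBP, 500 at-bats, 5 sac flies
obp = on_base_percentage(160, 70, 5, 500, 5)
```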
The term is most commonly used to refer to the strategy used by the front office of the 2002 Oakland Athletics, who, with approximately US$44 million in salaries, were competitive with larger-market teams such as the New York Yankees, who spent over US$125 million on payroll that same season. It derives its name from the 2003 book by Michael Lewis about the team’s analytical, evidence-based, sabermetric approach. Suffice it to say that there is also a 2011 motion picture of the same name, based on the book, starring Brad Pitt and Jonah Hill, through which the term became mainstream.
I will be using data from two very useful databases on baseball teams, players and seasons. One is curated by Sean Lahman, available at http://www.seanlahman.com/baseball-archive/statistics/. The other is from the nutshell package, which contains data sets used as examples in the book “R in a Nutshell” by Joseph Adler. More information about the package is available at https://cran.r-project.org/web/packages/nutshell/index.html.
The reason for picking two different datasets instead of one is that I wanted to perform the analysis on different sources. The decision also proved right on account of speed and practicality. The Lahman data set covers pitching, hitting and fielding performance, plus other tables, from 1871 through 2015. As we can see, it is thorough and up to date. The nutshell data set, on the other hand, is better designed for learning approaches (at least in my opinion) and comprises statistical data from 2000 to 2008 for every Major League Baseball team.
For those who are not familiar with baseball, a few points of explanation are important:
- Major League Baseball is a professional baseball league, where teams pay players to play baseball (I know it sounds silly and redundant, but I have to be sure everybody knows what we are talking about here).
- The goal of each team is to win as many games as possible out of a 162-game season. Winning enough games earns a ticket to the postseason and a chance to play in the World Series, where the champion is decided.
- Teams win games by scoring more runs than their adversary. A run is scored when a player advances around first, second and third base and returns safely to home plate (in other words, completes a lap around the infield).
- In principle, better players are expensive, so teams that want good players need to spend more money.
- Teams that spend the most frequently win the most (not always, but so often that it is fair to consider it a case of cause and effect).
I provide the analysis of both data sets in a Markdown page that can be accessed at @marcelo_tibau/exploratory-and-baseball
One of the reasons I chose the nutshell data set is that it is used as a case study in the book “R in a Nutshell” by Joseph Adler. Inspired by this case, I developed a simple app that predicts the number of runs scored by a team based on a linear model. For those curious to see it, a demo of the app can be found at @baseball-prediction
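A sketch of the idea behind such an app – a linear model of runs as a function of team on-base and slugging percentages – might look like the code below. The team-season rows are fabricated (by construction, runs = -600 + 2000·OBP + 1700·SLG, so the fit is exact); the app’s actual model is fitted on the real data sets described above.

```python
import numpy as np

# Fabricated team-season rows: (on-base pct, slugging pct, runs scored)
teams = np.array([
    [0.340, 0.440, 828],
    [0.330, 0.420, 774],
    [0.320, 0.400, 720],
    [0.350, 0.460, 882],
    [0.310, 0.390, 683],
    [0.335, 0.430, 801],
])

# Design matrix with an intercept column: runs ~ b0 + b1*OBP + b2*SLG
X = np.column_stack([np.ones(len(teams)), teams[:, 0], teams[:, 1]])
runs = teams[:, 2]
beta, *_ = np.linalg.lstsq(X, runs, rcond=None)

def predict_runs(obp, slg):
    """Predict a team's runs scored from its on-base and slugging percentages."""
    return float(beta[0] + beta[1] * obp + beta[2] * slg)
```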
December 8, 2016
On behalf of my teammates Benjamin Alves and Cristiano Franco, as well as myself, I would like to thank the Inter-American Development Bank for the 3rd place awarded to our team at the “Alerta Zika” data expedition. More than the prize itself, our greatest pride was being able to provide three insights to the municipal health secretariat and contribute to the efforts to fight the Zika disease in Rio de Janeiro.
November 21, 2016
There’s a song by Leonard Cohen that states “everybody knows” and “that’s how it goes”. The same goes for the fact that the amount of data generated by online activities is skyrocketing. This is true because more and more of our commerce, entertainment, and communication occur over the Internet, and despite concerns about globalization and information accuracy, it’s a trend that is impossible to curb. This data tsunami touches us all, so it’s only natural that it also reaches education. With analytics and data mining experiments in education starting to proliferate, sorting fact from fiction and identifying research possibilities and practical applications becomes a necessity.
Educational data mining and learning analytics work on the assumption of patterns and prediction. Both disciplines are used to research and build models in several areas that influence online learning systems. The bottom line here is that if we can discern a pattern in the data and make sense of what is going on, we can predict what should come next and take the appropriate action. The business world calls this insight, and it’s the difference between making “big bucks” and being caught unprepared. So believe me, it’s valuable.
Data mining for educational purposes can be used in basically two big areas. One is user modelling, which encompasses what a learner knows, what a learner’s behavior and motivation are, what the user experience is like, and how satisfied users are with online learning. The same kind of data used to model can be used to profile users. Profiling means grouping similar users into categories using salient characteristics. These categories can then be used to offer experiences to groups of users, to make recommendations individually, and to adapt how an online learning system performs.
A little explanation is needed at this point: online learning systems refer to online courses or to learning software or interactive learning environments that use intelligent tutoring systems, virtual labs, or simulations. They may be offered through a learning or course management system or through a learning platform. When online learning systems use data to change in response to student performance, they become adaptive learning environments.
The increasing use of online learning offers opportunities such as integrating assessment and learning and gathering information in nearly real time to improve future instruction. The process goes like this: as students work, the system captures their inputs, collecting evidence of activities, knowledge, and strategies used. Everything counts here: the information each student selects or inputs, the number of attempts the student makes, the allocation of time across parts of the process, and the number of hints and feedback given.
As students can benefit from detailed learning data, so the broader education community can benefit from an interconnected feedback system – for instance, knowing what works better for particular content and how to stimulate necessary skills like metacognition. As put by the U.S. Department of Education in a 2010 report (National Education Technology Plan – NETP, 2010a, p. 35): “The goal of creating an interconnected feedback system would be to ensure that key decisions about learning are informed by data and that data are aggregated and made accessible at all levels of the education system for continuous improvement”.
As these learning systems are expected to exploit learners’ activity data in detail to recommend what the next activity should be, and also to predict how a particular student will perform in future learning activities, being able to connect the dots and produce insights becomes a necessity. It is precisely here that data mining and learning analytics come in.
Understanding big data
Although using data to enhance decision processes is not new – it is used in what is known as business intelligence or analytics – it’s a relatively new approach in education. Like their business counterparts, learning analytics can discern historical patterns and trends from data, create models that predict future trends and patterns, and comprise applied techniques from computer science, mathematics, and statistics for extracting usable information from very large datasets.
Usually, data are stored in a structured format, which is easy for computers to manipulate. However, the data gathered from learning platforms often have a semantic structure that is difficult to discern computationally without human aid, and hence are called unstructured data (e.g. texts or images). Analyzing them requires techniques that work with unstructured text and image data and with data from multiple sources. When these data reach a vast amount, we have the famous big data. It’s important to understand that big data does not have a fixed size; it’s a concept. As any given number assigned to define it would change as computing technology advances to handle more data, big data is defined relative to current capabilities.
Big data, educational data mining and learning analytics
The large amount of data captured from online behavior feeds algorithms and enables them to infer users’ knowledge, intentions, and interests and to build models that can predict future behavior and interest. To achieve this goal, data mining and analytics are applied in the fields of educational data mining and learning analytics. Although there is no hard distinction between the two, they have had different research histories and distinct research areas.
In general, educational data mining (also known as EDM) looks for new patterns in data and develops new algorithms and models, using statistics, artificial intelligence, and (of course) data mining to analyze the data collected during teaching and learning. Learning analytics, in contrast, applies known predictive models in instructional systems, drawing on different kinds of knowledge, such as information science, sociology and psychology, as well as statistics, AI, and data mining, in order to influence educational practice.
Educational data mining
Diving a little deeper into the subject, the need to understand how students learn is the major force behind educational data mining. The suite of computational and psychological methods and research approaches supported by interactive learning methods and tools, such as intelligent tutoring systems, simulations and games, has opened up opportunities to collect and analyze student data and to discover patterns and trends in those data. Data mining algorithms help find variables that can be explored for modelling, and by applying data mining methods that classify data and find relationships, these models can be used to change what students experience next or even to recommend outside academic assignments to support their learning.
An important feature of educational data is that they are hierarchical. All the data (from the answers, the sessions, the teachers, the classrooms, etc.) are nested inside one another. Grouping them by time, sequence, and context provides levels of information that can show the impact of practice-session length or of the time spent learning – as well as how concepts build on one another and how practice and tutoring should be ordered. Providing the right context for this information helps to explain results and to know whether the proposed instructional strategy works or not. The methods that have been important in stimulating developments in mining educational data are those related:
1) To prediction, for understanding which behaviors in an online learning environment, such as participation in discussion forums and taking practice tests, can be used to predict outcomes such as which students might fail a class. It helps to develop models that provide insights that might better connect procedures or facts with the specific sequence and amount of practice items that best stimulate learning. It also helps to forecast or understand student educational outcomes, such as success on posttests after tutoring.
2) To clustering, meaning finding data points that naturally group together and that can be used to split a full dataset into categories. Examples of clustering are grouping students based on their learning difficulties and interaction patterns, or grouping by similarity in order to recommend actions and resources.
3) To relationship mining, meaning discovering relationships between variables in a dataset and encoding them as rules for later use. These techniques can be used to associate student activity (in a learning management system or discussion forums) with student grades, to associate content with user types to build recommendations for content that is likely to be interesting, or even to make changes to teaching approaches. This latter area, called teaching analytics, is of growing importance and key to discovering which pedagogical strategies lead to more effective or robust learning.
4) To distillation, a technique that involves depicting data in a way that enables humans to quickly identify or classify features of the data. This area of educational data mining improves machine learning models by allowing humans to identify patterns or features more easily, such as student learning actions, student behaviors or collaboration among students.
5) To model discovery, a technique that involves using a validated model (developed through such methods as prediction or clustering) as a component in further analysis. Discovery with models supports the discovery of relationships between student behaviors and student characteristics or contextual variables, the analysis of research questions across a wide variety of contexts, and the integration of psychometric modeling into machine-learned models.
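As one concrete illustration of the clustering family above, a tiny two-cluster k-means over made-up student features (forum posts per week, average quiz score) might look like this:

```python
# Hypothetical student features: (forum posts per week, average quiz score)
students = {
    "s1": (0.5, 45), "s2": (1.0, 50), "s3": (0.8, 40),
    "s4": (6.0, 88), "s5": (5.5, 92), "s6": (7.0, 85),
}

def kmeans2(points, c0, c1, iterations=10):
    """Minimal two-cluster k-means: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    for _ in range(iterations):
        g0, g1 = [], []
        for p in points:
            d0 = sum((a - b) ** 2 for a, b in zip(p, c0))
            d1 = sum((a - b) ** 2 for a, b in zip(p, c1))
            (g0 if d0 <= d1 else g1).append(p)
        if g0:
            c0 = tuple(sum(vals) / len(g0) for vals in zip(*g0))
        if g1:
            c1 = tuple(sum(vals) / len(g1) for vals in zip(*g1))
    return c0, c1

low_center, high_center = kmeans2(list(students.values()), (0.0, 0.0), (10.0, 100.0))
```

With these toy features, the algorithm recovers a low-engagement/low-score group and a high-engagement/high-score group, the kind of split that could then drive differentiated recommendations.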
Learning analytics emphasizes measurement and data collection as necessary activities to undertake, understand, analyze and report data for educational purposes. Unlike educational data mining, learning analytics generally does not emphasize reducing learning into components but instead seeks to understand entire systems and to support human decision making. It draws on a broad array of academic disciplines, incorporating concepts from information science, computer science, sociology, statistics, psychology, and the learning sciences.
The goal is to answer important questions that affect the way students learn and help us to understand the best way to improve organizational learning systems. Therefore, it emphasizes models that could answer questions such as:
- When are students ready to move on to the next topic?
- When is a student at risk for not completing a course?
- What is the best next course for a given student?
- What kind of help should be provided?
As a visual representation of analytics is critical to generating actionable analyses, the information is often presented as “dashboards” that show data in an easily digestible form. Although the methods used in learning analytics are drawn from those used in educational data mining, it may additionally employ social network analysis (to determine student-to-student and student-to-teacher relationships and interactions that help to identify disconnected students, influencers, etc.) and social metadata to determine what a user is engaged with.
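A very small illustration of that kind of social network analysis – finding disconnected students and likely influencers from forum reply pairs – could look like this (the names and interactions are invented):

```python
from collections import Counter

# Hypothetical forum interactions: (student who replied, student replied to)
interactions = [
    ("ana", "bruno"), ("bruno", "carla"), ("carla", "ana"),
    ("diego", "ana"), ("bruno", "diego"),
]
enrolled = {"ana", "bruno", "carla", "diego", "elisa", "felipe"}

# Degree = how many interactions a student takes part in, in either role
degree = Counter()
for replier, replied_to in interactions:
    degree[replier] += 1
    degree[replied_to] += 1

# Students with no interactions at all may need an instructor's attention
disconnected = sorted(s for s in enrolled if degree[s] == 0)
# The most connected students are candidate "influencers"
influencers = [s for s, _ in degree.most_common(2)]
```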
As content moves online and mobile devices enable 24/7 access to it, understanding what the data reveal can lead to fundamental shifts in teaching and learning systems as a whole. Learners and educators at all levels can benefit from understanding the possibilities of using big data in education. Data mining and learning analytics are two powerful tools that can help shape the future of human learning.
 Anaya, A. R., and J. G. Boticario. 2009. “A Data Mining Approach to Reveal Representative Collaboration Indicators in Open Collaboration Frameworks.” In Educational Data Mining 2009: Proceedings of the 2nd International Conference on Educational Data Mining, edited by T. Barnes, M. Desmarais, C. Romero, and S. Ventura, 210–219.
 Amershi, S., and C. Conati. 2009. “Combining Unsupervised and Supervised Classification to Build User Models for Exploratory Learning Environments.” Journal of Educational Data Mining 1 (1): 18–71.
Arnold, K. E. 2010. “Signals: Applying Academic Analytics.” EDUCAUSE Quarterly 33 (1). http://www.educause.edu/EDUCAUSE+Quarterly/EDUCAUSEQuarterlyMagazineVolum/SignalsApplyingAcademicAnalyti/199385
 Bajzek, D., J. Brooks, W. Jerome, M. Lovett, J. Rinderle, G. Rule, and C. Thille. 2008. “Assessment and Instruction: Two Sides of the Same Coin.” In Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2008, edited by G. Richards. Chesapeake, VA: AACE, 560–565.
 Baker, R. S. J. d. 2011. “Data Mining for Education.” In International Encyclopedia of Education, 3rd ed., edited by B. McGaw, P. Peterson, and E. Baker. Oxford, UK: Elsevier.
 Baker, R. S. J. d., A.T. Corbett, and V. Aleven. 2008. “More Accurate Student Modeling Through Contextual Estimation of Slip and Guess Probabilities in Bayesian Knowledge Tracing.” In Proceedings of the 9th International Conference on Intelligent Tutoring Systems. Berlin, Heidelberg: Springer-Verlag, 406–415.
 Baker, R. S. J. d., A.T. Corbett, K. R. Koedinger, and I. Roll. 2006. “Generalizing Detection of Gaming the System Across a Tutoring Curriculum.” In Proceedings of the 8th International Conference on Intelligent Tutoring Systems. Berlin, Heidelberg: Springer-Verlag, 402–411.
 Baker, R. S., A. T. Corbett, K. R. Koedinger, and A. Z. Wagner. 2004. “Off-Task Behavior in the Cognitive Tutor Classroom: When Students ‘Game the System.’” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’04). New York, NY: Association for Computing Machinery, 383–390.
 Baker, R. S. J. d., S. M. Gowda, and A. T. Corbett. 2011. “Automatically Detecting a Student’s Preparation for Future Learning: Help Use Is Key.” In Proceedings of the 4th International Conference on Educational Data Mining, edited by M. Pechenizkiy, T. Calders, C. Conati, S. Ventura, C. Romero, and J. Stamper, 179–188.
 Baker, R. S. J. D., and K. Yacef. 2009. “The State of Educational Data Mining in 2009: A Review and Future Visions.” Journal of Educational Data Mining 1 (1): 3–17.
 Balduzzi, M., C. Platzer, T. Holz, E. Kirda, D. Balzarotti, and C. Kruegel. 2010. Abusing Social Networks for Automated User Profiling. Research Report RR-10-233 – EURECOM, Sophia Antipolis; Secure Systems Lab, TU Wien and UCSB.
 Beck, J. E., and J. Mostow. 2008. “How Who Should Practice: Using Learning Decomposition to Evaluate the Efficacy of Different Types of Practice for Different Types of Students.” In Proceedings of the 9th International Conference on Intelligent Tutoring Systems.
 Bienkowski, Marie; Feng, Mingyu; Means, Barbara. Enhancing Teaching and Learning Through Educational Data Mining and Learning Analytics: An Issue Brief. Center for Technology in Learning. SRI International. 2012.
 Blikstein, P. 2011. “Using Learning Analytics to Assess Students’ Behavior in Open-Ended Programming Tasks.” Proceedings of the First International Conference on Learning Analytics and Knowledge. New York, NY: Association for Computing Machinery, 110–116.
 Brown, W., M. Lovett, D. Bajzek, and J. Burnette. 2006. “Improving the Feedback Cycle to Improve Learning in Introductory Biology Using the Digital Dashboard.” In Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2006I, edited by G. Richards. Chesapeake, VA: AACE, 1030–1035.
November 17, 2016 § Leave a comment
If you are into statistics, you probably already know the importance of regression analysis to statistical modelling. If you are not, suffice it to say that it is important: it is used for estimating the relationships among variables. There are many techniques and extensions for carrying out regression analysis, such as linear regression, multivariate linear regression (also known as the general linear model), and variants such as Bayesian multivariate linear regression, least squares, and so on.
What these approaches have in common is an equation of the form y = a + bx, where x is the explanatory variable and y is the dependent variable. The slope of the line is b, and a is the intercept (the value of y when x = 0).
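To make the equation concrete, here is a minimal sketch in R of fitting y = a + bx to simulated data; the variable names and the chosen true values (a = 3, b = 2) are illustrative, not from any dataset discussed here.

```r
# Simulate data from y = 3 + 2x plus noise, then recover a and b with lm()
set.seed(42)
x <- 1:50
y <- 3 + 2 * x + rnorm(50, sd = 1)  # true intercept a = 3, slope b = 2

fit <- lm(y ~ x)
coef(fit)  # estimated intercept (a) and slope (b), close to 3 and 2
```

`lm()` estimates the coefficients by least squares, the baseline against which the fancier variants above are usually compared.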
Harold V. Henderson and Paul F. Velleman provided a famous example of the use of a regression model in their paper “Building Multiple Regression Models Interactively”, published in 1981 in the journal Biometrics (for those who are interested in reading the original, see http://www.mortality.org/INdb/2008/02/12/8/document.pdf).
There they used what became known as the “Gasoline Mileage Data”, a dataset now used around the world for educational purposes. The data were extracted from the 1974 Motor Trend magazine and comprise gasoline mileage in miles per gallon (MPG) and ten aspects of automobile design and performance for 32 automobiles (1973-74 models). I explored these data using R’s built-in “mtcars” dataset. As I believe that any analysis should have a purpose, mine attempted to determine whether an automatic or a manual transmission is better for MPG and to quantify the MPG difference.
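A short sketch of that question in R, using the built-in mtcars dataset (this is only the unadjusted comparison, not the full multiple-regression analysis in the paper):

```r
# mtcars: am = 0 means automatic transmission, am = 1 means manual
data(mtcars)

# Compare mean MPG by transmission type
tapply(mtcars$mpg, mtcars$am, mean)

# Simple linear model: mpg as a function of transmission type
fit <- lm(mpg ~ factor(am), data = mtcars)
coef(fit)  # the factor(am)1 coefficient: manual cars average about 7.2 mpg more
```

The unadjusted difference overstates the effect of the transmission itself, since manual cars in this dataset also tend to be lighter; that is why the full analysis adds covariates such as weight.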
In doing so, I composed the following paper with linear and multiple regression models and the code to perform the modelling in R, as well as my personal analysis. The paper can be accessed at http://rpubs.com/marcelo_tibau/228029
November 11, 2016 § Leave a comment
For those who are eager to know more about Machine Learning and how it plays out in real-life work, I share a paper I wrote with the analysis, code and algorithms of a Machine Learning prediction assignment. I wrote the code in R, which is a statistical programming language. I would also like to thank PUC-Rio for providing the dataset I worked with.
Using devices such as Jawbone Up, Nike FuelBand, and Fitbit, it is now possible to collect a large amount of data about personal activity relatively inexpensively. These types of devices are part of the quantified self movement: a group of enthusiasts who take measurements about themselves regularly to improve their health, to find patterns in their behavior, or because they are tech geeks. One thing people regularly do is quantify how much of a particular activity they do, but they rarely quantify how well they do it. In this project, the goal was to use data from accelerometers on the belt, forearm, arm, and dumbbell of 6 participants, who were asked to perform barbell lifts correctly and incorrectly in 5 different ways.
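To give a feel for the workflow, here is a minimal, self-contained sketch in R of the train/test split and classification step. It uses simulated stand-in features and a decision tree (rpart, shipped with standard R) rather than the real accelerometer data and the model from the paper; feature names and class labels here are illustrative only.

```r
library(rpart)  # decision trees; a recommended package shipped with R

set.seed(123)
n <- 600
# Simulated "sensor" features whose means differ by activity class
classe     <- factor(sample(c("A", "B", "C"), n, replace = TRUE))
belt_accel <- rnorm(n, mean = as.integer(classe) * 2)
arm_accel  <- rnorm(n, mean = as.integer(classe) * -1)
df <- data.frame(classe, belt_accel, arm_accel)

# Hold out 30% of the data to estimate out-of-sample accuracy
idx   <- sample(n, size = 0.7 * n)
train <- df[idx, ]
test  <- df[-idx, ]

fit  <- rpart(classe ~ ., data = train)
pred <- predict(fit, test, type = "class")
mean(pred == test$classe)  # out-of-sample accuracy on the held-out set
```

The real assignment follows the same shape, only with many more sensor features, the five-level “classe” label, and a stronger model.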
The data for this project came from the Human Activity Recognition study, conducted by Pontifícia Universidade Católica – Rio de Janeiro.
Ugulino, W.; Cardador, D.; Vega, K.; Velloso, E.; Milidiu, R.; Fuks, H. Wearable Computing: Accelerometers’ Data Classification of Body Postures and Movements. In: Proceedings of the 21st Brazilian Symposium on Artificial Intelligence. Advances in Artificial Intelligence – SBIA 2012. Lecture Notes in Computer Science, pp. 52-61. Curitiba, PR: Springer Berlin/Heidelberg, 2012. ISBN 978-3-642-34458-9. DOI: 10.1007/978-3-642-34459-6_6.
It can be accessed at:
November 10, 2016 § Leave a comment
For years, the technology industry advanced the argument that an innovation-based economy required stimulating education, knowledge applied to intellectual property, and multiculturalism. This had a direct impact on public policy and legislation in several countries regarding the organization and methodologies of their educational systems, intellectual property protection, incentives for scientific research (which generates intellectual property), and immigration policy (especially rules on granting work visas to so-called skilled workers).
The United States, as one of the great driving forces of the tech industry, has always been seen as decisive in setting this market’s posture worldwide. It is natural, then, that a Trump presidency, amplified by Brexit (we must not forget that the United Kingdom is the world’s second-largest producer of intellectual property, behind only the US), should lead the industry to a strategic reassessment of its policies. E-mails have already begun to circulate around Silicon Valley proposing a repositioning toward defending tax cuts for the sector and a commitment to the repatriation of funds.
I understand this stance and recognize the need for repositioning, especially considering that Trump declared during the campaign that he would launch an antitrust action against Amazon and promised to force Apple to manufacture its products in the US. But one of the tech companies’ most powerful arguments about their own importance has always been that their goals were not merely financial: they encompassed the construction of a progressive future. Yes, they wanted money, but they also wanted to build a better world in philosophical and democratic terms; they championed education and knowledge as a way to empower people and encourage them to become smarter and more cultured. The logic was that smarter people were more likely to innovate.
Thomas Friedman, the author of “The World Is Flat”, which provided much of the conceptual basis for the arguments advanced by the tech industry, wrote about the result of the American elections in a piece entitled “Homeless in America” that so-called lifelong learning could be an inexhaustible source of stress for some people.
The risk this worldview poses is: if learning can do more harm than good, and if some people not only reject it but consciously act to prevent the formation of an environment that fosters the development of its source (knowledge), why prioritize education?
Forty percent of the scientific research carried out in the United Kingdom was funded by the European Union (rejected by the majority of Britons). Facebook and Twitter have been blamed for the decline of journalism and the irrelevance of facts (and, on top of that, for contributing to the spread of the trolling, racism and misogyny that characterized the campaign of now-President Trump). The growth of anti-tech sentiment may genuinely change the direction that educational policies had been taking in developed countries (which, like it or not, set the tone for the rest of the world).
The side effect may be the creation of a technological intellectual elite, because the industry will carry on and will need people with the ability to create intellectual property. But the dream of democratizing that ability may be over.