
Monday, December 28, 2015

MOOCs will replace APs


·         More people signed up for MOOCs in 2015 than in the first three years of the “modern” MOOC movement (which began in late 2011, when the first Stanford MOOCs took off).
·         Coursera accounts for 35% of all MOOC users; EdX, 18%; Canvas, 7%; FutureLearn, 6%; most other providers account for 1–4% each.
·         1,800 new courses launched in 2015.
·         Education and teaching courses account for 9.5% of all classes.
·         The newest business-model trend is MOOC providers creating their own credentials as a main source of revenue.
·         The average Coursera certificate course costs $56; the EdX average is $53.
·         There was a large increase in self-paced courses.
·         Providers are targeting the high school market for a stake in college readiness.

https://www.edsurge.com/news/2015-12-28-moocs-in-2015-breaking-down-the-numbers

Tuesday, December 15, 2015

Competency-Based Progression

Sanborn Regional High School in Kingston, NH

1.       We believe that all students can and must learn. In each of our courses, our competencies include explicit, measurable, transferable learning objectives that empower our students. They address both the application and creation of knowledge and the development of work study practices.

2.       We believe that all teachers must approach grading in the same manner. Grades represent what students learn, not what they earn. We use a four-point letter rubric scale to report both assignment and competency levels of achievement. Numerical “grades” are used only to report final overall course grades so we can compute class rank and GPA for college application purposes. We do not mix academic grades with behavior grades.

3.       We believe that the most significant learning takes place for our students through reflection and reassessment. Our students use the feedback they receive from rubrics to help them understand how to improve their learning.

4.       We believe that our teachers are most effective when they work in teams. We use the Professional Learning Community (PLC) structure to focus our teams on student learning. Over the years, we have found ways to maximize the time allotted for our teachers to collaborate with their PLCs, and this time is available to our teachers every day.

5.       We believe that assessment is meaningful and a positive learning experience for students. Our teachers focus on providing quality aligned instruction and performance assessment practices that are tuned to standards, providing students with multiple opportunities to demonstrate mastery.

6.       We believe that all students must receive timely, differentiated support based on their individual learning needs. We recognize that this support cannot always be embedded within the instructional time, and therefore our school has developed a structure to provide this support school-wide at a dedicated time each school day.

7.       We believe that there are many ways for our students to demonstrate mastery of competencies and thus earn credit for their graduation requirements. At our school, we have expanded credit-bearing opportunities far beyond simple traditional classroom courses. Through these alternative pathways, we have started to create a system whereby our students can advance upon demonstrated mastery.

8.       We believe that competency education is rigorous. Rigor is not defined by how much work we assign our students. It is defined by how deeply we engage them in their thinking, understanding, application, and extension of the skills and concepts presented to them through their coursework. We tune our instruction and assessment to the work of Hess’s Rigor Matrix.

9.       We believe that our school’s competency education philosophy aligns perfectly with the competency based systems that colleges and universities are moving to. To that end, we believe that a competency education model is the best way to prepare our students for college and career.

10.   We believe that competency education is ultimately transformed not by the way we report grades or how we build assessments but rather by how we approach instruction in the classroom. Our classroom teachers recognize that quality instruction engages all learners each and every day.


This article was written originally for Competency Works

How My Understanding of Competency Based Education Has Changed Over the Years
by Brian Stack • December 14, 2015 • 0 Comments

Each day as I interact with our teachers and our students, I am reminded of the extent to which our decision to move to a competency based model has positively influenced our school’s culture and climate, and our philosophy about learning. Today we are graduating students who have never known any other educational system than the one I described above. We spend a great deal of time with our new staff each fall immersing them in our beliefs about teaching and learning. Each day I see small victories from our work, ranging from students who are being held to higher standards to teacher teams who continue to advance their own understanding and application of the competency education philosophy. I challenge you to ask any of my teachers if they could ever go back to a traditional mindset; I can assure you that you won’t find one who would. We have truly transformed our professional culture into one focused on student learning.

Next week, I am excited to be sharing the work that my team and I have done in New Hampshire on competency based education with a group of South Carolina educators as part of the Transform SC institute on Meeting the Needs of Every Student With Competency Based Progression. My preparation for this institute has been an opportunity for me to reflect on what has now been a six-year journey with competency education at Sanborn Regional High School in Kingston, NH. This past week, our school district was recognized for the second year in a row as a “leader in competency education” by Tom Vander Ark’s organization Getting Smart, which noted that Sanborn was one of 30 School Districts Worth Visiting in 2015.

Throughout my journey as a building principal navigating the uncharted waters of a new competency education model, I have shared my thoughts, my reflections, and my research through articles on Competency Works. It has been three years since I wrote one of my first articles, entitled Five Things That Changed At My School When We Adopted Competencies. I am often asked how my views of competency education have evolved during my tenure at Sanborn. In that 2012 article, I talked about how our school community decided to “jump into the deep end of the pool” of high school redesign in an effort to provide a better learning experience for our students with a new competency based education model. I noted some big changes for our school community, which at the time was in its second year of implementing a competency education model adopted by our entire K-12 district. We were a school that was still very much in transition from an old way of thinking to a new one. We were leveraging our grading and reporting structures to ultimately help us transform instruction at the classroom level. Over the years, our understanding of competency education has deepened. We continue to learn more about ourselves each day through our work with our students and with each other as professionals. When visitors come to our school and talk with our teachers and our students, here is what they often tell me they take away from their visit.




Tuesday, December 1, 2015

Classroom Practices That Boost – and Dampen – Student Agency

From Marshall Memo:

1. Classroom Practices That Boost – and Dampen – Student Agency

            In this paper from Harvard’s Achievement Gap Initiative, Ronald Ferguson, Sarah Phillips, Jacob Rowley, and Jocelyn Friedlander report on their study of the ways in which grade 6-9 teachers in 490 schools influenced their students’ non-cognitive skills. The central variable that Ferguson and his colleagues measured was students’ agency. This, they write, “is the capacity and propensity to take purposeful initiative – the opposite of helplessness. Young people with high levels of agency do not respond passively to their circumstances; they tend to seek meaning and act with purpose to achieve the conditions they desire in their own and others’ lives. The development of agency may be as important an outcome of schooling as the skills we measure with standardized testing.”

            The researchers used data from Tripod surveys of students’ perceptions of their teachers [see Marshall Memo 461] to examine how Ferguson’s “Seven C” components of instruction (caring, conferring, captivating, clarifying, consolidating, challenging, and managing the classroom) influenced agency, which manifested itself in the following ways:
-              Punctuality – The student tries hard to arrive to class on time.
-              Good conduct – The student is cooperative, respectful, and on task.
-              Effort – The student pushes him- or herself to do the best quality work.
-              Help-seeking – The student is not shy about asking for help when needed.
-              Conscientiousness – The student is developing a commitment to produce quality work.
-              Happiness – The student regards the classroom as a happy place to be.
-              Anger – The student experiences this in class, which may boost or dampen agency.
-              Mastery orientation – The student is committed to mastering lessons in the class.
-              Sense of efficacy – The student believes he or she can be successful in the class.
-              Satisfaction – The student is satisfied with what he or she has achieved in the class.
-              Growth mindset – The student is learning to believe that he or she can get smarter.
-              Future orientation – The student is becoming more focused on future aspirations (e.g., college).

The researchers also identified a number of disengagement behaviors – the opposite of agency: faking effort, generally not trying, giving up if the work is too hard, and avoiding help.
What did the data reveal? Ferguson and his colleagues found that some teaching behaviors were agency boosters and others were agency dampers, indicating the delicate balance teachers must maintain between what they ask of students (academic and behavioral press) and what they give students (social and academic support). 

The details:

• Agency boosters – Requiring rigor came through strongly in the study – asking students to think more rigorously by striving to understand concepts, not simply memorize facts, or to explain their reasoning. This boosts mastery orientation, effort, growth mindset, conscientiousness, and future aspirations – but sometimes diminishes students’ happiness in class, feelings of efficacy, and satisfaction with what they’ve achieved. “These slightly dampened emotions in the short term,” say the researchers, “seem small prices to pay for the motivational, mindset, and behavioral payoffs we predict to result from requiring rigorous thinking. Combinations of teaching practices – for example, appropriately differentiated assignments, lucid explanations of new material, and curricular supports to accompany demands for rigor – seem quite relevant in this context.”

• Agency dampers – Caring may sometimes entail coddling: “in an effort to be emotionally supportive,” say the authors, “some teachers may be especially accommodating and this may depress student conduct as well as academic persistence.” Conferring can sometimes lack a clear purpose, which can undermine student effort and reduce time on task. Clearing up confusion can occur too automatically, with teachers doing the work for students and denying them the incentive and opportunity to diagnose and correct their own misunderstandings, which diminishes effort and conscientiousness.
            
• Future-orientation boosters – Caring and captivating are the teaching components most closely associated with college aspirations, the researchers found.
            
• Achievement boosters – Challenge and classroom management are the components correlated with students doing well on standardized tests, as the Measures of Effective Teaching study found.
            
“The point is not that there is a trade-off between annual learning gains and higher aspirations,” say Ferguson and colleagues. “Instead, the point is that the most important agency boosters for each are different. A balanced approach to instructional improvement will prioritize care and captivate to bolster aspirations, and challenge and classroom management to strengthen the skills that standardized tests measure. Certainly, without the skills that tests measure, college aspirations might be futile. But in turn, without college aspirations, the payoffs to those skills may be limited.”
Here is their distillation of ten classroom practices that develop agency:
-              Care – Be attentive and sensitive, but avoid coddling students in ways that hold them to lower standards of effort and performance.
-              Confer – Encourage and respect students’ perspectives and honor student voice, but do so while remaining focused on instructional goals – and don’t waste class time with idle chatter.
-              Captivate – Make lessons stimulating and relevant while knowing that some students may hide their interest.
-              Clarify with lucid explanations – Strive to develop clearer explanations, including how the skills and knowledge you teach are useful in the exercise of effective agency in real life – especially for the material students find most difficult.
-              Clarify by clearing up confusion – Take regular steps to detect and respond to confusion in class, but do so in ways that share responsibility with students.
-              Clarify with instructive feedback – Give instructive feedback in ways that provide scaffolding for students to solve their own problems.
-              Consolidate – Regularly summarize lessons to help consolidate learning.
-              Challenge by requiring rigor – Press students to think deeply instead of superficially about what they are learning. Anticipate some resistance from students who might prefer a less-stressful approach – but be tenacious.
-              Challenge by requiring persistence – Consistently require students to keep trying and searching for ways to succeed even when work is difficult.
-              Classroom management – Achieve respectful, orderly, and on-task student behavior by using clarity, captivation, and challenge instead of coercion.

“The Influence of Teaching: Beyond Standardized Test Scores: Engagement, Mindsets, and Agency – A Study of 16,000 Sixth Through Ninth-Grade Classrooms” by Ronald Ferguson with Sarah Phillips, Jacob Rowley, and Jocelyn Friedlander, a paper from The Achievement Gap Initiative at Harvard University, Oct. 2015, http://www.agi.harvard.edu/publications.php


Tuesday, November 24, 2015

Fix, Don’t Discard MCAS/PARCC



This fall I had one-on-one conversations with many of our state's leaders and experts about the misplaced opposition to testing in general, caused by some legitimate mistakes in current public policy. Below is my analysis in the form of an argument.

Whereas:

  1. As enacted in 1993, the Massachusetts Comprehensive Assessment System (MCAS) was initially intended to support a comprehensive portfolio of integrated state and local assessment types. Whether the Board picks Measured Progress or a PARCC-developed test to serve as the G3-8 + G10 state summative assessment, the basic structure and name of MCAS remain the same. PARCC is a more modern, computer-delivered assessment. PARCC items are more challenging and more college- and career-aligned than MCAS, incorporating the higher-order thinking skills that testing opponents purport to value.

  2. There is growing opposition to state summative assessments such as MCAS/PARCC, due in part to the fact that the current structure fails to meet dual expectations for accountability and instruction. In addition, many parents and teachers correctly object to the loss of instructional time and the unfair use of assessment results to put down schools and districts serving students in poverty.

  3. With that said, state summative assessments such as MCAS and PARCC are essential to school and district accountability; they focus school resources on student proficiency and growth, and they establish the imperative for state involvement in districts like Lawrence and Holyoke. Elizabeth Warren, the Mass Business Alliance for Education, and the Rennie Center all agree that the MCAS accountability system must be improved, not discarded.

  4. Urban educators and families correctly object to the use of status data instead of growth data to hold schools and districts accountable. CPI and other metrics that consider status (scaled score or percent proficient) unfairly bias public opinion and housing patterns away from schools that serve students in poverty, even though those schools are frequently better in terms of SGP than schools serving fewer students in poverty. Professor Jack Schneider of the Holy Cross School of Education has written about the negative impact this has on society and the importance of considering multiple factors before drawing valid insights from the data. Damian Betebenner, the originator of SGP, echoes this approach.

  5. While SGP is valid and essential for school and district accountability, it is mostly a distraction in teacher accountability. Fewer than 1 in 5 teachers teach a grade and subject for which SGP can be calculated; the SGP data is very “noisy” at that low N; and it fails to account for the systemic supports (after-school tutors, etc.) that are critical but outside the direct control of the teacher.

  6. MCAS does not deliver timely or instructionally significant results. Spring testing does little to prepare teachers for fall instruction. Summer mobility and cognitive regression mean that the students in front of a teacher are not well measured by last spring’s exam. The item distribution is insufficient to create skill-level profiles for students, yet far more than is needed, through sampling, for school- and district-level results.

  7. The overreaction to MCAS has resulted in an extended test administration that typically runs for three weeks to administer a test that should take one to two days, disrupting instruction and pushing schools into a “lockdown” mentality. Some schools overreact in test preparation as well, although the vast majority of that work is exactly what the most vulnerable students need.

Therefore:  Part of the solution is to split MCAS into two distinct components.  Whatever direction the state takes with regard to PARCC, the policies surrounding assessment need to acknowledge that the same test cannot practically be used for both accountability and instruction.  The Legislature should direct DESE to implement a greatly abbreviated spring assessment used exclusively to generate SGP for school and district accountability, and a fall, locally administered, state-coordinated assessment designed to produce skill profiles for each student to inform instruction.

Students change over the summer.  New kids enroll.  Many kids regress; some accelerate.  Differentiated instruction and personalized learning require a detailed understanding of each learner's mastery of generally accepted skills.  The current MCAS and proposed PARCC assessment designs do not produce skill-level reports, nor is the information timely or complete for the students assigned to each teacher.  Teachers need, and families deserve, educational support aimed at each student's “zone of proximal development,” not just the middle of the group.  Embedded diagnostic assessment tools like Khan Academy, TenMarks, IXL, ALEKS, DreamBox, etc. provide teachers with the information they need to differentiate instruction without endless additional grading or pulling students away from time-on-task learning.

It is time for the Commonwealth to lead again.  The current debate between a growing extremist movement from both the left and the right and a moderate middle that wants to retain common-sense, high, and specific expectations for all students is a destructive waste of energy.  We are better than that.  Again and again, Massachusetts has led the nation in education innovation.  We need to fix MCAS, reduce testing time, stop penalizing urban districts for serving poor students, and focus on giving teachers and families the tools they need to ensure that every child reads by third grade, every middle school graduate is competent in algebra and proficient in writing, and every high school student graduates with the core STEM, ELA, and life skills they need to be ready for higher education and careers.  Let’s end the destructive debate and get to work.
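The distinction the argument above draws between status metrics and SGP can be made concrete with a toy sketch. This is not Betebenner's actual methodology (real SGP uses quantile regression over several prior years); it simply ranks each student's current score against peers who had the same prior-year score, with all names and scores invented for illustration:

```python
from collections import defaultdict
from bisect import bisect_left

def simple_sgp(students):
    """Toy Student Growth Percentile: rank each student's current
    score only against peers with the same prior-year score."""
    # Group current scores by prior-year score
    bands = defaultdict(list)
    for s in students:
        bands[s["prior"]].append(s["current"])
    sgps = {}
    for s in students:
        peers = sorted(bands[s["prior"]])
        # percent of same-prior peers scoring strictly below this student
        rank = bisect_left(peers, s["current"])
        sgps[s["name"]] = round(100 * rank / len(peers))
    return sgps

cohort = [
    {"name": "A", "prior": 220, "current": 230},
    {"name": "B", "prior": 220, "current": 250},
    {"name": "C", "prior": 220, "current": 240},
    {"name": "D", "prior": 260, "current": 255},  # high status, low growth
]
print(simple_sgp(cohort))
```

Note that student D has the highest scaled score (status) but the lowest growth percentile, which is exactly why status metrics and growth metrics can rank the same schools in opposite orders.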

Thursday, November 12, 2015

AERA Growth for Teacher Eval

The American Educational Research Association is warning schools against using value-added scores when they make high-stakes decisions about teachers.

Value-added scores aim to measure the impact a teacher or teacher preparation program has on student achievement. But from a research perspective, it's very difficult to successfully isolate teachers and their training programs from the myriad other factors that play into how students perform on tests, AERA said in a new policy statement.

In fact, the conditions needed to make VAM scores accurate can't be met in many cases, according to the statement.


"This statement draws on the leading testing, statistical, and methodological expertise in the field of education research and related sciences, and on the highest standards that guide education research and its applications in policy and practice," said AERA Executive Director Felice J. Levine. 

Monday, November 2, 2015

Houston Badges

So You Want to Drive Instruction With Digital Badges? Start With the Teachers

Terry Grier
Oct 31, 2015
You can’t have a conversation about the future of public education these days without some mention of digital learning. And when you talk about digital learning, the discussion often turns to badging.

The concept is simple: individuals earn badges for demonstrating the acquisition of key knowledge and skills. Think Girl Scouts. When you marry the concept of badging with technology, you get digital badges that allow a person’s portfolio of badges to be stored in one place and provide a record of subject or skill mastery. This could have a significant impact on awarding credentials or certificates to students, and perhaps even creating an implementation framework for competency-based learning.

While badging for students shows real promise, a partnership between the Houston Independent School District (HISD) and VIF International Education demonstrates that in the short run the best approach to scaling digital badging is not to focus on students, but on their teachers.

Beginning this past fall, HISD launched a global learning initiative in 28 elementary schools. The district will expand the program to a total of 51 elementary schools for the 2015-16 school year. To ensure program quality for our students, we partnered with VIF to provide our teachers with globally themed online professional development and a customized digital badging system. Within the professional development platform, they also have access to curricular resources and a community of fellow educators to spur and support collaborative projects and innovative approaches. But the core of the system is the badging approach to professional development.

Participating teachers advance through a series of inquiry-based professional development modules. Teachers are awarded a digital badge for the successful completion of each 10-hour module. To accomplish this, they must complete the following steps: 1) study module content, 2) participate in a focused discussion with peers working on the same module, 3) create an original inquiry-based global lesson plan that incorporates new learning, 4) implement the original lesson plan in the classroom, 5) provide evidence of classroom implementation and 6) reflect on and revise the lesson created.

The final product of every module is a tested, global lesson plan that articulates learning objectives, activities, assessments, and resources for each stage of inquiry. Upon completion, teachers may publish finalized lessons in a resource library where they can be accessed by other educators. As designed, the HISD badging system will be a four-year, 16-badge approach that equates to 160 hours of professional learning for teachers.

Like other web-based professional development, the HISD badging system provides flexibility for HISD teachers to access the modules online at any time and place and to complete them at their own pace. This flexibility is critical to help teachers balance their everyday demands with the expectation to build new expertise in content, pedagogy and new technologies.

What makes the digital badging system different from more traditional forms of professional development are five key features that taken together increase significantly the likelihood that the learning experience for a teacher will lead to results in the classroom for students — which, after all, is the point of professional development. The five features:

Badging requires demonstrating understanding and implementation of a target content or skill. To complete a module successfully requires more than just moving through the content. Teachers must learn it; confer with peers; develop, implement and show evidence of a lesson plan using it; and reflect on the experience.
Badging provides recognition and motivation. Badges are tangible, public symbols both of demonstrated learning and of the knowledge and skills a teacher has yet to develop. They create a recognizable pathway to demonstrating proficiency that teachers can understand and own.
Badging allows for knowledge circulation among teachers. By requiring the development of lesson plans and evidence of implementation, digital badging systems create instructional materials that teachers can share and build from with each other. Digital badges accumulate in a teacher’s online profile, can be shared via social media, and acknowledged by schools, districts and states.
Badging can be tracked and assessed. The HISD system provides tailored reports on the progress of teachers through the badging process. This function allows principals and district instructional support personnel to not only track the completion of badges and review developed materials, but to assess the impact of the modules on teacher and student learning.
Badging is a scalable enterprise. Once the modules and overall pathways are set, teachers can be added at whatever scale the district wants. The online platform scales to whatever number of teachers the district seeks to involve.
For teachers, digital badges could have use value beyond their work in HISD. It allows them to build a badging portfolio that reflects the skills and knowledge they have developed, as well as evidence of classroom impact. That portfolio is portable. It remains with them whether they remain in the same school, move to another school within HISD, or to another district altogether.
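The portable-portfolio idea can be sketched as a simple data structure. This is a hypothetical illustration, not VIF's or HISD's actual system; the class names, fields, and module titles are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Badge:
    """One earned badge: the completed module plus evidence of classroom use."""
    module: str
    hours: int            # professional-development hours the module represents
    evidence_url: str     # link to the implemented lesson plan

@dataclass
class TeacherPortfolio:
    """A record of earned badges that stays with the teacher across schools."""
    teacher: str
    badges: list = field(default_factory=list)

    def award(self, badge: Badge) -> None:
        self.badges.append(badge)

    def total_hours(self) -> int:
        # e.g., progress toward the 16-badge, 160-hour pathway
        return sum(b.hours for b in self.badges)

portfolio = TeacherPortfolio("J. Smith")
portfolio.award(Badge("Global Inquiry 1", 10, "https://example.org/lesson1"))
print(portfolio.total_hours())  # 10
```

Because the portfolio is just data attached to the teacher rather than to a school, it travels with them, which is the portability the article emphasizes.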

For school and district leaders, the badging system creates a platform for at least two future endeavors. First, personalizing professional development pathways, with modules and badges that reflect an individual teacher’s learning needs. Second, developing a career advancement system based on expertise demonstrated through badging.

The HISD-VIF digital badging system for teachers offers a professional development experience that teachers have been seeking: one that is flexible, job-embedded, and collaborative, and provides actionable strategies for use in the classroom. It is like wheels on luggage. You are left wondering why it took so long to put this system in place. 

Dr. Terry Grier is the superintendent of the Houston Independent School District.

Thursday, October 29, 2015

SUNY Offers Badges

The State University of New York will soon offer "micro-credentials" to more students, chancellor Nancy Zimpher will announce Thursday at SUNYCON in Manhattan.

Micro-credentials, also known as "badges," are digital documents that demonstrate a student has a specific competency. SUNY piloted the program at Stony Brook University, which offers badges to education and business majors with descriptions such as Investment Analysis, Diverse Literatures and Teaching Students with Special Needs.

The program will soon expand to more SUNY campuses and to more majors, Zimpher will announce on Thursday. SUNY will form a task force made up of faculty, administrators and workforce experts to plan for the systemwide expansion.

Alexander Cartwright, SUNY provost and executive vice chancellor, said the badges will help prepare liberal arts graduates to demonstrate their employability.

"For a lot of people who have liberal arts degrees, long-term they do incredibly well because they have such a rich skill set that they learned in college - about learning, about logic, about arguing, about how you actually have a good life," Cartwright said in a phone interview. "Where they struggle a little bit is getting that first job."

Cartwright also said the badges might help encourage other students to graduate, thereby improving SUNY's completion rates.


"It has to do with whether [students] believe they can complete their degree or not," he said. "So, can we give them something that says you've demonstrated competency in a specific area that gives you a qualification that is on the path to something much bigger?"

Ranking States Controlling for Poverty

Sunday, August 23, 2015

Minority families divided on Common Core, testing

8/23/15 7:01 PM EDT
Black parents, Hispanic parents and white parents are divided on some of the most contentious issues in education, including the Common Core and standardized testing, according to the 47th annual PDK/Gallup poll released today.
While a majority of public school parents overall oppose the Common Core, black and Hispanic families were more likely to support the standards. (This is the first time the PDK/Gallup poll broke down responses by demographics.) The appetite for higher academic standards is there, however: Parents named academic standards as one of the five biggest problems facing their communities.
Overall, standardized testing lacks public support, with a majority of parents across the board saying that there’s too much of it. About 44 percent of white parents said they should be allowed to opt their children out of the tests, along with 35 percent of Hispanic parents and 28 percent of black parents.
“Communities of color tend to see the standardized tests as more valuable,” PDK International CEO Joshua Starr said.  “There are a lot of factors involved with that.”
He said urban and under-resourced schools might see the tests as more important, but he said he’s hesitant to draw conclusions from the demographic differences. He said the data is something PDK hopes to further unpack. 
Recent data on record-high opt outs in New York state showed that students who skipped the tests were more likely to be white and from areas with low to moderate needs.
Sixty-five percent of public school parents overall said they wouldn’t excuse their own children from standardized tests. Broken down by demographics, three-quarters of black parents said they wouldn’t excuse their children, compared to 65 percent of Hispanic parents and 54 percent of white parents.
— Caitlin Emma




Friday, July 31, 2015

Personalized Learning

Key Takeaways
  • Personalized learning should be defined as the right educational approach for the right student at the right time.

  • The growth of personalized learning is highly dependent on the capacity of institutions to improve the collection and analysis of learning data.

  • True personalization of online learning will create an educational process similar to what tutoring provides now.

Improved Analytics Critical to the Personalization of Online Learning


Improved data collection and analysis is critical to the expansion of personalized learning in higher education, which itself is central to the move towards a more hybrid and online postsecondary environment.

The evolution of technology and technological tools over recent years has positively impacted the effectiveness of online learning, which has transformed into a highly engaging, highly integrated platform for students to pursue postsecondary credentials with maximum flexibility. Of course, as with any technology, there is still room for improvement and growth. Online learning has the space to become even more personalized. In this interview, Michael Horn discusses the current state of personalization in the online learning space and shares his thoughts on what the future might hold for online education.


The EvoLLLution (Evo): How truly personalized is online programming today?
Michael Horn (MH): Online learning today is personalized in the sense that it starts to give students control over the pace of their learning and the time when it occurs. It can offer much more flexibility given the asynchronous technologies.
Where there is still a lack of personalization is in the different pathways that students take towards mastery. Certain programs are certainly addressing this and we’re seeing adaptive learning engines like Knewton appear to do some exciting things to better target and personalize for different students. It still feels like we’re really in the early beginnings of the dramatic revolution that we’ve seen in a lot of other technology sectors where really smart recommendation engines come in and assist the student in picking and choosing their unique path.
Evo: What are the most significant limits on the amount of personalization and adaptability that can be introduced into an online course?
MH: In order to really go towards adaptive learning, you need huge numbers of students on your platform, and there aren’t a lot of platforms that have that. If you think about it, the ability of Google to personalize advertisements for you, or Amazon to personalize shopping recommendations, or Netflix to personalize movies — those are still relatively rudimentary themselves. There’s a limit to these engines even before you talk about learning. What’s exciting is that there are potentially a lot more data points available. Every few minutes you can be having interactions that help you understand what a student does and doesn’t understand.
It’s a more complex problem to collect all that data and there are many more variables affecting it. There are also a lot of policies and regulations in place that potentially prohibit the data that we can use to improve this problem quickly. These regulations can inhibit what data we can collect and how we can use it to create the best learning experience at the right time for students.
Evo: Why is personalization of programming so important in terms of student success and outcomes?
MH: It’s related to the value tutoring has to the learning experience. There’s a great deal of evidence that tutoring is actually the best learning opportunity. A tutor can constantly see where a student lacks a certain understanding, or doesn’t quite have the background knowledge about something, and then tweak and tailor the approach and try different things to personalize it for that student. The fundamental insight is that learning is really based on a few things. One is that people are motivated by and passionate about different things. Secondly, we all have different amounts of knowledge that we can manipulate in active memory. Additionally, we all have different levels of background knowledge when we enter a learning experience.
Personalization along those dimensions is critical to unlocking student success.
Evo: To your mind, what might personalization of online courses look like in 20 years?
MH: What we’re going to learn over time is to get much more specific about what sorts of differences there are between learners and which ones have the most impact on learning and learning outcomes.
Right now people are unsure how much adaptive capacity we want in our learning compared with student control and agency. In the future—much as with Google or Amazon where the user has a lot of control but the engine is also automating and making suggestions to enhance that control—you’re going to see similar marriages in learning. You’re going to see a range of approaches for students, where some students go through game-based learning where they’re going through some really exciting simulations or games to master something, and other students will just read a text because, for them, they have the background knowledge to access it and it will be a more efficient way to learn.
Evo: What are the biggest roadblocks to realizing this vision of the future?
MH: We need platforms that can collect the data we need and can make better use of data so that we can figure out different ways to serve different learners.  We also need to pay a lot of attention to the learning models themselves. If we use an adaptive platform like Knewton in a traditional classroom, it actually won’t be that useful because the teacher and students are not going at different paces. We need to really shift the learning models themselves, put students at the center and change the way they interact with educators. Competency-based learning is a really important ingredient to this and together, these will be the things that need to fall into place for this to really have the impact that it could.
Evo: As personalization and online courses evolve and grow, do you see the possibility of a higher ed environment emerging where there is no “traditional higher education” left but instead a range of hybrid models?
MH: That’s exactly right. Online learning serves certain people well but if you imagine online learning more as a platform that helps students and teachers find the right path forward in any given subject—whether that’s offline or online—then you can imagine this pervading every single learning experience in the future.
The only places where it might not take root are in those truly specialized subjects for which there are only a few students and teachers capable or interested in studying.
Evo: Is there anything else you’d like to add about the transformation and evolution of personalized online learning and what it will take to create the sort of learning landscape that we’ve been talking about today?
MH: Certain people have cast big doubts in the last couple years on the wisdom of personalized learning. One of the reasons you see pushback on that notion is because there are actually many different definitions for what personalized learning even is.
I have a relatively simple one: it’s the right approach for the right student at the right time. Nothing more nothing less. If people step back from it a little bit and see it a little more simply, it’s easier to understand the power that personalized learning can and should have for all students and realize that it’s something that we would all want in our learning.
This interview has been edited for length.

Sunday, June 14, 2015

This is what I am looking for

Now imagine what this might look like in practice. Students come to school and learn through a variety of face-to-face and online activities. As they learn, they are given opportunities to practice and demonstrate their learning and receive feedback on an ongoing basis. When they complete learning activities that require them to use basic factual or procedural knowledge, software evaluates their performance and provides immediate feedback. When they complete learning activities that require deeper levels of understanding, analysis, and critical thinking, the learning platform captures their performance (in video, audio, written, or other formats) and immediately sends it to expert graders who score their work and provide feedback to help the students improve. Then, as students progress through the platform’s learning activities, the results from both the machine-graded and human-graded standardized assessment items are incorporated to create a complete and robust picture of the students’ mastery of learning standards.

From Thomas Arnett June 12, 2015 post on The key to rigorous online assessments at Christensen Institute

- See more at: http://www.christenseninstitute.org/the-key-to-rigorous-online-assessments/#sthash.fGgg3fIb.dpuf
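The blended-grading idea in the excerpt above — machine-graded items and human-graded rubric scores rolled up into one per-standard mastery picture — can be sketched in a few lines. This is purely illustrative; the function name, weighting, and 0–1 score scale are assumptions, not anything described in Arnett's post.

```python
def mastery_profile(machine_scores, human_scores, machine_weight=0.4):
    """Combine machine- and human-graded results per learning standard.

    machine_scores / human_scores: dicts mapping a standard name to a
    list of scores on a 0.0-1.0 scale. Standards graded only one way
    fall back to that single average.
    """
    profile = {}
    for standard in set(machine_scores) | set(human_scores):
        m = machine_scores.get(standard, [])
        h = human_scores.get(standard, [])
        m_avg = sum(m) / len(m) if m else None
        h_avg = sum(h) / len(h) if h else None
        if m_avg is None:
            profile[standard] = h_avg
        elif h_avg is None:
            profile[standard] = m_avg
        else:
            # Weight the human-graded deeper tasks more heavily.
            profile[standard] = machine_weight * m_avg + (1 - machine_weight) * h_avg
    return profile

machine = {"fractions": [0.9, 0.8], "argument-writing": [0.6]}
human = {"argument-writing": [0.75, 0.85]}
print(mastery_profile(machine, human))
```

The weight split is the only design choice of note: it encodes the post's premise that deeper, human-scored work should count for more than machine-checkable recall.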

Sunday, March 15, 2015

A History of the Urban Dashboard

Mission Control: A History of the Urban Dashboard



Mission Control Center, Houston, 1965. [NASA]
We know what rocket science looks like in the movies: a windowless bunker filled with blinking consoles, swivel chairs, and shirt-sleeved men in headsets nonchalantly relaying updates from “Houston” to outer space. Lately, that vision of Mission Control has taken over City Hall. NASA meets Copacabana, proclaimed the New York Times, hailing Rio de Janeiro’s Operations Center as a “potentially lucrative experiment that could shape the future of cities around the world.” The Times photographed an IBM executive in front of a seemingly endless wall of screens integrating data from 30 city agencies, including transit video, rainfall patterns, crime statistics, car accidents, power failures, and more. 1
Futuristic control rooms have proliferated in dozens of global cities. Baltimore has its CitiStat Room, where department heads stand at a podium before a wall of screens and account for their units’ performance. 2 The Mayor’s office in London’s City Hall features a 4×3 array of iPads mounted in a wooden panel, which seems an almost parodic, Terry Gilliam-esque take on the Brazilian Ops Center. Meanwhile, British Prime Minister David Cameron commissioned an iPad app – the “No. 10 Dashboard” (a reference to his residence at 10 Downing Street) – which gives him access to financial, housing, employment, and public opinion data. As The Guardian reported, “the prime minister said that he could run government remotely from his smartphone.” 3
Rio Operations Center, 2012. [IBM]
This is the age of Dashboard Governance, heralded by gurus like Stephen Few, founder of the “visual business intelligence” and “sensemaking” consultancy Perceptual Edge, who defines the dashboard as a “visual display of the most important information needed to achieve one or more objectives; consolidated and arranged on a single screen so the information can be monitored at a glance.” A well-designed dashboard, he says — one that makes proper use of bullet graphs, sparklines, and other visualization techniques informed by the “brain science” of aesthetics and cognition — can afford its users not only a perceptual edge, but a performance edge, too. 4 The ideal display offers a big-picture view of what is happening in real time, along with information on historical trends, so that users can divine the how and why and redirect future action. As David Nettleton emphasizes, the dashboard’s utility extends beyond monitoring “the current situation”; it also “allows a manager to … make provisions, and take appropriate actions.” 5
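The sparklines Few advocates are tiny, axis-free trend glyphs meant to be read "at a glance." A minimal text-only sketch of the idea, using Unicode block characters (the data and rendering choice are invented for illustration):

```python
# Render a metric's recent history as a compact text sparkline,
# one glyph per data point, scaled to the series' own min/max.
BARS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero on a flat series
    return "".join(BARS[int((v - lo) / span * (len(BARS) - 1))] for v in values)

daily_ridership = [310, 420, 395, 500, 480, 610, 590]
print(sparkline(daily_ridership))  # one glyph per day, trend visible at a glance
```

Because each series is scaled to its own range, a sparkline shows shape rather than magnitude — exactly the "glance" trade-off Few describes.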
Juice Software, KnowNow, Rapt … the names conjured up visions of an Omniscient Singularity fueled by data, hubris, and Adderall.
In 2006, when Few published the first edition of his Information Dashboard Design manual, folks were just starting to recognize the potential of situated media. Design critic John Thackara foretold an emerging market for “global spreadsheets” (his term for data displays) that could monitor the energy use of individual buildings or the ecological footprint of entire cities and regions. Thackara identified a host of dashboard players already on the scene — companies like Juice Software, KnowNow, Rapt, Arzoon, ClosedloopSolutions, SeeBeyond, and CrossWorlds — whose names conjured up visions of an Omniscient Singularity fueled by data, hubris, and Adderall. 6
By now we know to interpret the branding conceits of tech startups with amused skepticism, but those names reflect a recognition that dashboard designers are in the business of translating perception into performance, epistemology into ontology. 7 They don’t merely seek to display information about a system but to generate insights that human analysts use to change that system — to render it more efficient or sustainable or profitable, depending upon whatever qualities are valued. The prevalence and accessibility of data are changing the way we see our cities, in ways that we can see more clearly when we examine the history of the urban dashboard.
Bloomberg Terminal, 2009. [Ryuzo Masunaga/Bloomberg]

From Bloomberg Terminals to Bloomberg’s New York

Data displays often mimic the dashboard instrumentation of cars or airplanes. Where in a car you’d find indicators for speed, oil, and fuel levels, here you’ll find widgets representing your business’s “key performance indicators”: cash flow, stocks, inventory, and so forth. Bloomberg terminals, which debuted in 1982, allowed finance professionals to customize their multi-screen displays with windows offering real-time and historical data regarding equities, fixed-income securities, and derivatives, along with financial news feeds and current events (because social uprisings and natural disasters have economic consequences, too), and messaging windows, where traders could provide context for the data scrolling across their screens. Over the last three decades, the terminals have increased in complexity. As in a flight cockpit, the Bloomberg systems involve custom input devices: a specialized keyboard with color-coded keys for various kinds of shares, securities, markets, and indices; and the B-UNIT® portable scanner that can biometrically authenticate users on any computer or mobile device. The Bloomberg dashboard is no longer locked into the iconic two-screen display; traders can now access the dashboard “environment” on a variety of devices, just as David Cameron can presumably govern a nation via BlackBerry.
The Enron scandal incited a cultural shift … Chief Information Officers finally embraced the dashboard’s panoptic view.
The widespread adoption of the Bloomberg terminal notwithstanding, it took a while for dashboards to catch on in the corporate world. Stephen Few reports that during much of the ’80s and ’90s, large companies focused on amassing data, without carefully considering which indicators were meaningful or how they should be analyzed. He argues that the 2001 Enron scandal incited a cultural shift. Recognizing the role of data in corporate accountability and ethics, the Chief Information Officers of major companies finally embraced the dashboard’s panoptic view. I’d add another reason: before dashboards could diffuse into the zeitgeist, we needed a recognized field of data science and a cultural receptivity to data-driven methodologies and modes of assessment.
The dashboard market now extends far beyond the corporate world. In 1994, New York City police commissioner William Bratton adapted former officer Jack Maple’s analog crime maps to create the CompStat model of aggregating and mapping crime statistics. Around the same time, the administrators of Charlotte, North Carolina, borrowed a business idea — Robert Kaplan’s and David Norton’s “total quality management” strategy known as the “Balanced Scorecard” — and began tracking performance in five “focus areas” defined by the City Council: housing and neighborhood development, community safety, transportation, economic development, and the environment. Atlanta followed Charlotte’s example in creating its own city dashboard. 8
Real Time Crime Center, New York City. [via NYC Police Foundation]
In 1999, Baltimore mayor Martin O’Malley, confronting a crippling crime rate and high taxes, designed CitiStat, “an internal process of using metrics to create accountability within his government.” (This rhetoric of data-tested internal “accountability” is prevalent in early dashboard development efforts.) 9 The project turned to face the public in 2003, when Baltimore launched a website of city operational statistics, which inspired DCStat (2005), Maryland’s StateStat (2007), and NYCStat (2008). 10 Since then, myriad other states and metro areas — driven by a “new managerialist” approach to urban governance, committed to “benchmarking” their performance against other regions, and obligated to demonstrate compliance with sustainability agendas — have developed their own dashboards. 11
The Open Michigan Mi Dashboard is typical of these efforts. The state website presents data on education, health and wellness, infrastructure, “talent” (employment, innovation), public safety, energy and environment, financial health, and seniors. You (or “Mi”) can monitor the state’s performance through a side-by-side comparison of “prior” and “current” data, punctuated with a thumbs-up or thumbs-down icon indicating the state’s “progress” on each metric. Another click reveals a graph of annual trends and a citation for the data source, but little detail about how the data are actually derived. How the public is supposed to use this information is an open question.
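The Mi Dashboard's prior/current comparison with a thumbs-up or thumbs-down icon is a trivial computation once each metric's preferred direction is known. A toy version (the metrics, values, and direction flags are invented; nothing here reflects Michigan's actual data pipeline):

```python
# Mi Dashboard-style "progress" icon: compare a prior and current
# value, respecting whether higher or lower is better for the metric.
def progress_icon(prior, current, higher_is_better=True):
    improved = current > prior if higher_is_better else current < prior
    return "👍" if improved else "👎"

print(progress_icon(82.1, 84.3))                         # graduation rate rose
print(progress_icon(7.9, 6.5, higher_is_better=False))   # unemployment fell
```

Note what the icon hides — sample sizes, data sources, and margins of error — which is precisely the article's complaint about how little the dashboard reveals of how the data are derived.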
Mi Dashboard. [Open Michigan]
Some early dashboard projects have already been abandoned, and others have gone on hiatus while they await technical upgrades. The now-dormant LIVE Singapore! project, a collaboration of MIT’s Senseable City Lab and the Singapore-MIT Alliance for Research and Technology (SMART), was intended to be an “open platform” for the collection, combination, and distribution of real-time data, and a “toolbox” that developer communities could use to build their own civic applications. 12 The rise of smartphones and apps has influenced a new wave of projects that seek not just to visualize data but to give us something to do with it, or layer on top of it.
Over the past several years, a group of European cities has been collaborating on the development of urbanAPI, which proposes to help planners engage citizens in making decisions about urban development. Boston’s Citizens Connect has more modest aspirations: it allows residents to report potholes, damaged signs, and graffiti. Many projects have scaled back their “built-in” civic engagement aspirations even further. Citizens’ agency is limited to accessing data, perhaps customizing the dashboard interface and thereby determining which sources are prioritized, and supplying some of that data passively (often unwittingly) via their mobile devices or social media participation. If third parties wish to use the data represented on these platforms in order to develop their own applications, they’re free to do so — but the platforms themselves involve few, if any, active participation features.
In 2012, London launched an “alpha” prototype of the City Dashboard that powers the mayor’s wall of iPads. 13 Created by the Bartlett Centre for Advanced Spatial Analysis at University College London, and funded by the government through the National e-Infrastructure for Social Simulation, the web-based platform features live information on weather, air quality, train status, and surface transit congestion, as well as local news. 14 Data provided by city agencies are supplemented by CASA’s own sensors (and, presumably, by London’s vast network of CCTV cameras). In aggregate, these sources are meant to convey the “pulse” of London. Other urban cadences are incorporated via social media trends, including tweets from city media outlets and universities, along with a “happiness index” based on an “affect analysis” of London’s social media users. 15 The CASA platform has also been deployed in other UK cities, from Glasgow to Brighton.
City Dashboard, London. [Bartlett Centre for Advanced Spatial Analysis]
By now these dashboard launches are so common that we begin to see patterns. Dublin’s dashboard, released just last fall by the Programmable City project and the All-Island Research Observatory at Maynooth University, integrates data from numerous sources — Dublin City Council, the regional data-sharing initiative Dublinked, the Central Statistics Office, Eurostat, and various government departments — and presents it via real-time and historical data visualizations and interactive maps. The platform is intended to help its audiences — citizens, public employees, and businesses — with their own decision-making and “evidence-informed analysis,” and to encourage the independent development of visualizations and applications. 16
Urban dashboard projects embody a variety of competing ideologies.
Such projects embody a variety of competing ideologies. They open up data to public consumption and use. They render a city’s infrastructures visible and make tangible, or in some way comprehensible, various hard-to-grasp aspects of urban quality-of-life, including environmental metrics and, in the case of the happiness index, perhaps even mental health. Yet at the same time these platforms often cultivate a top-down, technocratic vision that, as Paola Ciuccarelli and colleagues argue, “can be problematic, especially if matters such as the active engagement of all the stakeholders involved in designing, operating, and controlling these dashboards are not properly addressed.” 17 What’s more, these urban dashboards perpetuate the fetishization of data as a “monetizable” resource and a positivist epistemological unit — and they run the risk of framing the city as a mere aggregate of variables that can be measured and “optimized” to produce an efficient or normative system. 18
John Nott Sartorius, A Horse and Carriage in a Landscape.

A History of Cockpits and Control

The dashboard as “frame” — of human agency, of epistemologies and ideologies, of the entities or systems it operationalizes through its various indicators — has a history that extends back much farther than ’80s-era stock brokerage desks and ’90s crime maps. Likewise, the dashboard’s relation to the city and the region — to space in general — predates this century’s interactive maps and apps. The term dashboard, first used in 1846, originally referred to the board or leather apron on the front of a vehicle that kept horse hooves and wheels from splashing mud into the interior. Only in 1990, according to the Oxford English Dictionary, did the term come to denote a “screen giving a graphical summary of various types of information, typically used to give an overview of (part of) a business organization.” The acknowledged partiality of the dashboard’s rendering might make us wonder what is bracketed out. Why, all the mud of course! All the dirty (un-“cleaned”) data, the variables that have nothing to do with key performance (however it’s defined), the parts that don’t lend themselves to quantification and visualization. All the insight that doesn’t accommodate tidy operationalization and air-tight widgetization: that’s what the dashboard screens out.
All the insight that doesn’t accommodate tidy operationalization and air-tight widgetization: that’s what the dashboard screens out.
Among the very pragmatic reasons that particular forces, resources, and variables have historically thwarted widgetization is that we simply lacked the means to regulate their use and measure them. The history of the dashboard, then, is simultaneously a history of precision measurement, statistics, instrument manufacturing, and engineering — electrical, mechanical, and particularly control engineering. 19 Consider the dashboard of the Model T Ford. In 1908, the standard package consisted solely of an ammeter, an instrument that measured electrical current, although you could pay extra for a speedometer. You cranked the engine to start it (by 1919 you could pay more to add an electric starter), and once the engine was running, you turned the ignition switch from “battery” to “magneto.” There was no fuel gauge until 1909; before then, you dipped a stick in the fuel tank to test your levels. Water gushing from the radiator, an indicator you hoped not to see, was your “engine temperature warning system.” As new means of measurement emerged, new gauges and displays appeared.
The lone dashboard instrument in an early Model T Ford. [Flickr/Commons]
And then things began to evolve in the opposite direction: as more and more mechanical operations were automated, the dashboard evolved to relay their functioning symbolically, rather than indexically. By the mid-50s, the oil gauge on most models was replaced by a warning, or “idiot,” light. The driver needed only a binary signal: either (1) things are running smoothly; or (2) something’s wrong; panic! 20 The “Maintenance Required” light came to indicate a whole host of black-boxed measurements. The dashboard thus progressively simplified the information relayed to the driver, as much of the hard intellectual and physical labor of driving was now done by the car itself.
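The "idiot light" reduction described above — many black-boxed measurements collapsed into one binary signal — is easy to make concrete. A minimal sketch, with sensor names and safe ranges invented purely for illustration:

```python
# Collapse several raw sensor readings into a single binary
# "Maintenance Required" signal, the way an idiot light does.
def maintenance_light(readings):
    """Return True (light on) if any reading falls outside its safe range."""
    safe_ranges = {
        "oil_pressure_psi": (25, 65),
        "coolant_temp_c": (70, 105),
        "battery_volts": (12.0, 14.8),
    }
    for name, (lo, hi) in safe_ranges.items():
        value = readings.get(name)
        if value is not None and not (lo <= value <= hi):
            return True  # something's wrong; panic!
    return False  # things are running smoothly

print(maintenance_light({"oil_pressure_psi": 15}))  # True: light on
```

The driver sees one bit; which sensor tripped, and by how much, stays inside the black box — exactly the simplification the paragraph describes.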
Dashboard design in today’s automobiles is driven primarily by aesthetics. It’s currently fashionable to give the driver lots of information — most of which has little impact on her driving behavior — so she feels in control of this powerful machine. Most “key performance indicators” have little to do with the driver’s relationship to the car. Just as important are her relationship to (1) the gas tank, (2) her Bluetooth-linked iPhone, and (3) the state trooper’s radar gun. 21 While some “high-performance” automobiles are designed to make drivers feel like they’re piloting a fighter jet, the dashboard drama is primarily for show. It serves both to market the car and to cultivate the identity and agency of the driver: this assemblage of displays requires a new literacy in the language and aesthetics of the interface, which constitutes its own form of symbolic, if not mechanical, mastery.
In an actual fighter jet, of course, all those gauges play a more essential operational role. As Frederick Teichmann wrote, in his 1942 Airplane Design Manual, “All control systems terminate in the cockpit; all operational and navigational instruments are located here; all decisions regarding the flight of the airplane, with … very few exceptions … are determined here.” 22 Up through the late ’20s or early ’30s, however, pilots had few instruments to consult. World War I pilots, according to Branden Hookway, were “expected to rely almost solely on unmediated visual data and ‘natural instinct’ for navigation, landing, and target sighting”; navigation depended on a mixture of “dead reckoning (estimating one’s position using log entries, compass, map, etc., in absence of observation) and pilotage (following known landmarks directly observed from the air).” 23 And while some instruments — altimeter, airspeed indicator, hand-bearing compass drift sight, course and direction calculator, and oil pressure and fuel gauges — had become available by the war’s end, they were often inaccurate and illegible, and most pilots continued to fly by instinct and direct sight.
Cockpit of a North American F-100D jet fighter, 1956. [U.S. Air Force]
Throughout the 1920s, research funded by the military and by instrument manufacturers like Sperry sought to make “instrument flying” more viable. By 1928, Teichmann writes, pilots were flying faster, more complicated planes and could no longer “trust their own senses at high altitudes or in fogs or in cross-country flights or in blind flying”:
They must rely, for safety’s sake, almost entirely on radio communication, radio beacons, range compass findings, gyroscopic compasses, automatic pilots, turn and bank indicators, and at least twenty-five or more other dials and gadgets essential to the safe operation of the airplane in all kinds of weather. 24
In short, they came to depend on the dashboard for their survival. The instrumentation of piloting represented a new step in automation, according to Jones and Watson, authors of Digital Signal Processing. For the first time, automated processes began “replacing sensory and cognitive processes as well as manipulative processes.” 25 Dashboards manifested the “perceptual edge” of machines over their human operators.
Still, the dashboard and its user had to evolve in response to one another. The increasing complexity of the flight dashboard necessitated advanced training for pilots — particularly through new flight simulators — and new research on cockpit design. 26 Hookway argues that recognizing the cockpit-as-interface led to the systematized design of flight instrumentation that would streamline the flow of information. Meanwhile, recognizing the cockpit-as-environment meant that designers had to attend to the “physiological and psychological needs of pilot and aircrew,” which were shaped by the cramped quarters, noise, cold temperatures, and reduced atmospheric pressure of the plane. 27 Military applications also frequently required communication and coordination among pilots, co-pilots, navigators, bomb operators, and other crew members, each of whom relied on his own set of instruments. 28
Plotting table at RAF Uxbridge, headquarters of No. 11 Group. [Daniel Stirland]

The Control Room as Immersive Dashboard

Before long, the cockpit grew too large for the plane:
Phone lines linked controllers to the various airfields, which communicated with individual planes by high-frequency radio. A special red hotline went directly to Fighter Command headquarters at Bentley Priory. Plotters hovered around the situation map. … A vast electric tableau, glowing in a bewildering array of colored lights and numbers, spanned the wall opposite the viewing cabin like a movie curtain. On this totalizator, or tote board, controllers could see at a glance the pertinent operational details — latest weather, heights of the balloon barrage layer guarding key cities, and most especially, fighter status.
That was the Control Room of No. 11 Group of the RAF Fighter Command, at Uxbridge, England, in September 1940, as described by Robert Buderi in his book on the history of radar. 29 The increasing instrumentation of flight and other military operations, and the adoption of these instrumental control strategies by government and business, led to the creation of immersive environments of mosaic displays, switchboards, and dashboards — from Churchill’s War Rooms to the Space Age’s mythologized mission control.
The push-button changed the way we started our cars, summoned our servants, dialed our phones, manufactured our Space Sprockets, and waged our wars.
In the early 1970s, under Salvador Allende, Chile attempted to implement Project Cybersyn, a cybernetics-informed decision-support system for managing the nation’s economy. The hexagonal “Opsroom” was its intellectual and managerial hub, where leaders could access data, make decisions, and transmit advice to companies and financial institutions via telex. 30 Four of the room’s six walls offered space for “dashboards.” 31 One featured four “datafeed” screens housed in fiberglass cabinets. Using a button console on their chair armrests, administrators could control which datafeed was displayed — graphs of production capacities, economic charts, photos of factories, and so forth. It was a proud moment for the humble push-button — that primary means of offering binary input into our dashboards — which, in the course of a century, changed the way we started our cars, summoned our servants, dialed our phones, manufactured our Space Sprockets, and (demonstrating its profound ethical implications) waged our wars. Media historian Till Heilmann, who is investigating the push-button as an integral element in the history of digital technology, argues that pushing buttons — a practice that he traces back to operation of the electric telegraph (but which might go back farther, to the design of musical instruments) — is among the most important “cultural techniques” of the industrial and post-industrial ages. 32
Cybersyn Ops Room, Chile, 1972. [Gui Bonsiepe]
Another of the Opsroom’s walls featured two screens with algedonic alerts: red lights that blinked with increasing frequency to reflect the escalating urgency of problems in the system. On yet another wall, Cybersyn architect Stafford Beer installed a display for his Viable System Model, which helped “participants remember the cybernetic principles that supposedly guided their decision-making processes.” 33 The final “data” wall featured a large metal surface, covered with fabric, on which users could rearrange magnetic icons that represented components of the economy. The magnets offered an explicit means of analog visualization and play, yet even the seemingly interactive “datafeed” screens were more analog than they appeared. Although the screens resembled flat-panel LCDs, they were actually illuminated from the rear by slide projectors behind the walls. The slides themselves were handmade and photographed. The room’s futuristic Gestalt — conveyed by those streamlined dashboards, with their implication of low-barrier-to-entry, push-button agency — was a fantasy. “Maintaining this [high-tech] illusion,” Eden Medina observes, “required a tremendous amount of human labor” behind the screens. 34
Cybersyn’s lessons have filtered down through the years to inform the design of more recent control rooms. In a 2001 edited volume on control room design, various authors advocated for the simultaneous consideration of human-computer interaction and human cognition and ergonomics. They addressed the importance of discerning when it’s appropriate to display “raw” data sets and when to employ various forms of data visualization. They advocated for dashboarded environments designed to minimize human error, maximize users’ “situation awareness” and vigilance, facilitate teamwork, and cultivate “trust” between humans and machines. 35
We might read a particular ideology in the design of Baltimore’s CitiStat room, which forces department managers to stand before the data that are both literally and methodologically behind their operations. The stage direction reassures us that it is those officials’ job to tame the streams of data — to contextualize this information so that it can be marshaled as evidence of “progress.” The screen interfaces themselves — those “control rooms in a box,” we might say — embody in their architectures particular ways of thinking and particular power structures, which we must critically analyze if we’re using these structures as proxies for our urban operations. 36
C.J. Smith and a Model T race car after the New York to Seattle Transcontinental Endurance Race, 1909. [via The Henry Ford]

Critical Mud: Structuring and Sanitizing the Dashboard

Now that dashboards — and the epistemologies and politics they emblematize — have proliferated so widely, across such diverse fields, we need to consider how they frame our vision, what “mud” they bracket out, and how the widgetized screen-image of our cities and regions reflects or refracts the often-dirty reality. In an earlier article for Places, I outlined a rubric for critically analyzing urban interfaces. Here, I’ll summarize some key points and highlight issues that are particularly pertinent to urban dashboards:
First, the dashboard is an epistemological and methodological pastiche. It represents the many ways a governing entity can define what variables are important (and, by extension, what’s not important) and the various methods of “operationalizing” those variables and gathering data. Of course, whatever is not readily operationalizable or measurable is simply bracketed out. A city’s chosen “key performance indicators,” as Rob Kitchin and colleagues observe, “become normalized as a de facto civic epistemology through which a public administration is measured and performance is communicated.” 37
The dashboard also embodies the many ways of rendering that data representable, contextualizable, and intelligible to a target audience that likely has only a limited understanding of how the data are derived. 38 Hookway notes that “the history of the interface” — or, in our case, the dashboard — is also a “history of intelligences … it delimits the boundary condition across which intelligences are brought into a common expression so as to be tested, demonstrated, reconciled, and distributed.” 39 On our urban dashboards we might see a satellite weather map next to a heat map of road traffic, next to a ticker of city expenditures, next to a word-cloud “mood index” drawing on residents’ Twitter and Facebook updates. This juxtaposition represents a tremendous variety of lenses on the city, each with its own operational logic, aesthetic, and politics. Viewers can scan across data streams, zoom out to get the big picture, zoom in to capture detail; and this flexibility, as Kitchin and colleagues write, improves “a user’s ‘span of control’ over a large repository of voluminous, varied and quickly transitioning data … without the need for specialist analytics skills.” 40 However, while the dashboard’s streamlined displays and push-button inputs may lower barriers to entry for users, the dashboard frame — designed, we must recall, to keep out the mud — also does little to educate those users about where the data come from, or about the politics of information visualization and knowledge production. 41
One view of the Dublin Dashboard: bike availability, parking capacity, and travel time.
In turn, those representational logics and politics structure the agency and subjectivity of the dashboard’s users. These tools do not merely define the roles of the user — e.g. passive or active data-provider, data monitor, data hacker, app builder, user-of-data-in-citizen-led-urban-redevelopment — they also construct her as an urban subject and define, in part, how she conceives of, relates to, and inhabits her city. Thus, the system also embodies a kind of ontology: it defines what the city is and isn’t, by choosing how to represent its parts. If a city is understood as the sum of its component widgets — weather plus crime statistics plus energy usage plus employment data — residents have an impoverished sense of how they can act as urban subjects. Citizens may be encouraged to use a city’s open data, to build layers on top of the dashboard, to develop their own applications; but even these applications, if they’re to be functional, have to adhere to the dashboard’s protocols.
For the dashboard’s governing users, the system shapes decision-making and promotes data-driven approaches to leadership. As we noted earlier, dashboards are intended not merely to allow officials to monitor performance and ensure “accountability,” but also to make predictions and projections — and then to change the system in order to render the city more sustainable or profitable or efficient. As Kitchin and colleagues propose, dashboards allow for macro, longitudinal views of a city’s operations and offer an “evidence base far superior to anecdote.” 42
The risk here is that the dashboard’s seeming comprehensiveness and seamlessness suggest that we can “govern by Blackberry” — or “fly by instrument” — alone. Such instrumental approaches (given most officials’ disinclination to reflect on their own methods) can foster the fetishization and reification of data, and open the door to analytical error and logical fallacy. 43 As Adam Greenfield explains:
Correlation isn’t causation, but that’s a nicety that may be lost on a mayor or a municipal administration that wants to be seen as vigorously proactive. If fires disproportionately seem to break out in neighborhoods where lots of poor people live, hey, why not simply clear the poor people out and take credit for doing something about fire? After all, the city dashboard you’ve just invested tens of millions of dollars in made it very clear that neighborhoods that had the one invariably had the other. But maybe there was some underlying, unaddressed factor that generated both fires and the concentration of poverty. (If this example strikes you as a tendentious fabulation, or a case of reductio ad absurdum, trust me: the literature of operations research is replete with highly consequential decisions made on grounds just this shoddy.) 44
Cities are messy, complex systems, and we can’t understand them without the methodological and epistemological mud. Given that much of what we perceive on our urban dashboards is sanitized, decontextualized, and necessarily partial, we have to wonder, too, about the political and ethical implications of this framing: what ideals of “openness” and “accountability” and “participation” are represented by the sterilized quasi-transparency of the dashboard?

Getting Back to the Dirt

Contrast the dashboard’s panoptic view of the city with that of another urban dashboard from the late 19th century, when the term was still used primarily to refer to mud shields. The Outlook Tower in Edinburgh, Scotland, began in the 1850s as an observatory with a camera obscura on the top floor. Patrick Geddes, Scottish polymath and town planner, bought the building in 1892 and transformed it into a “place of outlook and … a type-museum which would serve not only as a key to a better understanding of Edinburgh and its region, but as a help towards the formation of clearer ideas of the city’s relation to the world at large.” 45 This “sociological laboratory” — which Anthony Townsend, in Smart Cities, describes as a “Victorian precursor” to Rio’s digital dashboard — embodied Geddes’s commitment to the methods of observation and the civic survey, and his conviction that one must understand a place within its regional and historical contexts. 46 Here, I’ll quote at length from two historical journal articles, not only because they provide an eloquent explication of Geddes’s pedagogical philosophy and urban ideology, but also because their rhetoric provides such stark contrast to the functionalist, Silicon Valley lingo typically used to talk about urban dashboards today.
Outlook Tower. [from Patrick Geddes, Cities in Evolution, 1915]
The tower’s visitors were instructed to begin at the top, in the camera obscura, where they encountered projections of familiar city scenes — “every variety of modern life,” from the slums to the seats of authority — and where they could not “fail to be impressed with the relation of social conditions to topography,” as Charles Zueblin reported in 1899, in The American Journal of Sociology. The camera obscura, he wrote, “combines for the sociologist the advantages of the astronomical observatory and the microscopical laboratory. One sees both near and distant things.” Continuing:
One has a wider field of view than can be enjoyed by the naked eye, and at the same time finds more beautiful landscape thrown on the table by the elimination of some of the discordant rays of light. One sees at once with the scientist’s and the artist’s eye. The great purpose of the camera obscura is to teach right methods of observation, to unite the aesthetic pleasure and artistic appreciation with which observation begins, and which should be habitual before any scientific analysis is entered upon, with the scientific attitude to which every analysis should return. 47
This apparatus offers both a macro view and the opportunity to “zoom in” on the details, which is a feature of interactive digital dashboards, too. But here that change in scale is informed by an aesthetic sensibility, and an awareness of the implications of the scalar shift.
“On the Terrace Roof,” according to a 1906 exhibition review, “one has again an opportunity of surveying the Edinburgh Region, but in the light of day and in the open air” — and, Zueblin notes, “with a deeper appreciation because of the significance given to the panorama by its previous concentration” in the camera obscura:
Here the observer has forced upon him various aspects of the world around him; weather conditions, the configuration of the landscape, the varying aspect of the gardens as the seasons pass, our relation to the sun with its time implications, the consideration of direction of orientation, etc. 48
Descending the floors, visitors encountered exhibitions — charts, plans, maps, models, photos, sketches, etc. — that situated them within their spatial contexts at increasing scale: first the archaeology and historical evolution of Edinburgh; then the topography, history, and social conditions of Scotland; then the empire, with an alcove for the United States; Europe; and, finally, the Earth. (Zueblin admits that this last part of the exhibition, which in 1899 lacked the great globe that Geddes hoped to install, was underdeveloped.) Along the way, visitors came across various scientific instruments and conventions — a telescope, a small meteorological station, a set of surveying instruments, geological diagrams — that demonstrated how one gained insight into space at various scales.
“The ascent of the tower provides one with a cyclopaedia,” Zueblin observes, “the descent, a laboratory. … In the basement we find the results, not only of the processes carried on above, but also classifications of the arts and sciences, from Aristotle or Bacon to Comte and Spencer, and we incidentally have light thrown on the intellectual development of the presiding genius here.” 49 The building thus embodied various modes of understanding; it was a map of intellectual history.
At the same time, the tower gave shape to Geddes’s synthetic pedagogy: one that began with the present day and dug deeper into history, and one that started at home and extended outward into the region, the globe, and perhaps even the galaxy. The Tower impressed upon its visitors a recognition that, in order to “understand adequately his region,” they needed to integrate insights from various fields of specialization: biology, meteorology, astronomy, history, geology — yes, even those who study the mud and rocks thrown into the vehicle. 50
Today’s urban dashboards fail to promote a similarly rich experiential, multidisciplinary pedagogy and epistemology. The Outlook Tower was both a dashboard and its own epistemological demystifier — as well as a catapult to launch its users out into the urban landscape itself. It demonstrated that “to use results intelligently the geographer must have some knowledge of how they are obtained” — where the data come from. 51 The lesson here is that we can’t know our cities merely through a screen. From time to time, we also need to fly by sight, fiddle with exploding radiators, and tramp around in the mud.