We know what rocket science looks like in the movies: a windowless bunker filled with blinking consoles, swivel chairs, and shirt-sleeved men in headsets nonchalantly relaying updates from “Houston” to outer space. Lately, that vision of Mission Control has taken over City Hall. “NASA meets Copacabana,” proclaimed the New York Times, hailing Rio de Janeiro’s Operations Center as a “potentially lucrative experiment that could shape the future of cities around the world.” The Times photographed an IBM executive in front of a seemingly endless wall of screens integrating data from 30 city agencies, including transit video, rainfall patterns, crime statistics, car accidents, power failures, and more. 1
Futuristic control rooms have proliferated in dozens of global cities. Baltimore has its CitiStat Room, where department heads stand at a podium before a wall of screens and account for their units’ performance. 2 The Mayor’s office in London’s City Hall features a 4×3 array of iPads mounted in a wooden panel, which seems an almost parodic, Terry Gilliam-esque take on the Brazilian Ops Center. Meanwhile, British Prime Minister David Cameron commissioned an iPad app – the “No. 10 Dashboard” (a reference to his residence at 10 Downing Street) – which gives him access to financial, housing, employment, and public opinion data. As The Guardian reported, “the prime minister said that he could run government remotely from his smartphone.” 3
This is the age of Dashboard Governance, heralded by gurus like Stephen Few, founder of the “visual business intelligence” and “sensemaking” consultancy Perceptual Edge, who defines the dashboard as a “visual display of the most important information needed to achieve one or more objectives; consolidated and arranged on a single screen so the information can be monitored at a glance.” A well-designed dashboard, he says — one that makes proper use of bullet graphs, sparklines, and other visualization techniques informed by the “brain science” of aesthetics and cognition — can afford its users not only a perceptual edge, but a performance edge, too. 4 The ideal display offers a big-picture view of what is happening in real time, along with information on historical trends, so that users can divine the how and why and redirect future action. As David Nettleton emphasizes, the dashboard’s utility extends beyond monitoring “the current situation”; it also “allows a manager to … make provisions, and take appropriate actions.” 5
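The two visualization idioms Few champions are compact enough to sketch in a few lines. Below is a minimal, illustrative rendering of a text-mode sparkline (a word-sized trend line) and a bullet graph (a measure compared against a target); the function names, the character palette, and the sample numbers are my own inventions, not Few’s specifications.

```python
# Illustrative sketch of two dashboard widgets Few popularized.
# All names and sample values here are invented for demonstration.

BLOCKS = "▁▂▃▄▅▆▇█"  # eight levels, low to high

def sparkline(values):
    """Render a numeric series as a compact Unicode sparkline."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero on flat series
    return "".join(
        BLOCKS[int((v - lo) * (len(BLOCKS) - 1) / span)] for v in values
    )

def bullet(measure, target, width=20):
    """Render a bullet graph: a filled bar with '|' marking the target."""
    scale = max(measure, target)
    bar = ["─"] * width
    for i in range(int(measure * (width - 1) / scale) + 1):
        bar[i] = "█"
    bar[int(target * (width - 1) / scale)] = "|"
    return "".join(bar)

print(sparkline([3, 5, 2, 8, 6, 9, 4]))   # → ▂▄▁▇▅█▃
print(bullet(measure=68, target=85))       # bar falls short of the target mark
```

The sparkline shows trend at a glance; the bullet graph shows performance against a goal — exactly the “monitored at a glance” ideal the dashboard promises.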
Juice Software, KnowNow, Rapt … the names conjured up visions of an Omniscient Singularity fueled by data, hubris, and Adderall.
In 2006, when Few published the first edition of his Information Dashboard Design manual, folks were just starting to recognize the potential of situated media. Design critic John Thackara foretold an emerging market for “global spreadsheets” (his term for data displays) that could monitor the energy use of individual buildings or the ecological footprint of entire cities and regions. Thackara identified a host of dashboard players already on the scene — companies like Juice Software, KnowNow, Rapt, Arzoon, Closedloop Solutions, SeeBeyond, and CrossWorlds — whose names conjured up visions of an Omniscient Singularity fueled by data, hubris, and Adderall. 6
By now we know to interpret the branding conceits of tech startups with amused skepticism, but those names reflect a recognition that dashboard designers are in the business of translating perception into performance, epistemology into ontology. 7 They don’t merely seek to display information about a system but to generate insights that human analysts use to change that system — to render it more efficient or sustainable or profitable, depending upon whatever qualities are valued. The prevalence and accessibility of data are changing the way we see our cities, in ways that we can see more clearly when we examine the history of the urban dashboard.
From Bloomberg Terminals to Bloomberg’s New York
Data displays often mimic the dashboard instrumentation of cars or airplanes. Where in a car you’d find indicators for speed, oil, and fuel levels, here you’ll find widgets representing your business’s “key performance indicators”: cash flow, stocks, inventory, and so forth. Bloomberg terminals, which debuted in 1982, allowed finance professionals to customize their multi-screen displays with windows offering real-time and historical data regarding equities, fixed-income securities, and derivatives, along with financial news feeds and current events (because social uprisings and natural disasters have economic consequences, too), and messaging windows, where traders could provide context for the data scrolling across their screens. Over the last three decades, the terminals have increased in complexity. As in a flight cockpit, the Bloomberg systems involve custom input devices: a specialized keyboard with color-coded keys for various kinds of shares, securities, markets, and indices; and the B-UNIT® portable scanner that can biometrically authenticate users on any computer or mobile device. The Bloomberg dashboard is no longer locked into the iconic two-screen display; traders can now access the dashboard “environment” on a variety of devices, just as David Cameron can presumably govern a nation via BlackBerry.
The Enron scandal incited a cultural shift … Chief Information Officers finally embraced the dashboard’s panoptic view.
The widespread adoption of the Bloomberg terminal notwithstanding, it took a while for dashboards to catch on in the corporate world. Stephen Few reports that during much of the ’80s and ’90s, large companies focused on amassing data, without carefully considering which indicators were meaningful or how they should be analyzed. He argues that the 2001 Enron scandal incited a cultural shift. Recognizing the role of data in corporate accountability and ethics, the Chief Information Officers of major companies finally embraced the dashboard’s panoptic view. I’d add another reason: before dashboards could diffuse into the zeitgeist, we needed a recognized field of data science and a cultural receptivity to data-driven methodologies and modes of assessment.
The dashboard market now extends far beyond the corporate world. In 1994, New York City police commissioner William Bratton adapted former officer Jack Maple’s analog crime maps to create the CompStat model of aggregating and mapping crime statistics. Around the same time, the administrators of Charlotte, North Carolina, borrowed a business idea — Robert Kaplan’s and David Norton’s “total quality management” strategy known as the “Balanced Scorecard” — and began tracking performance in five “focus areas” defined by the City Council: housing and neighborhood development, community safety, transportation, economic development, and the environment. Atlanta followed Charlotte’s example in creating its own city dashboard. 8
In 1999, Baltimore mayor Martin O’Malley, confronting a crippling crime rate and high taxes, designed CitiStat, “an internal process of using metrics to create accountability within his government.” (This rhetoric of data-tested internal “accountability” is prevalent in early dashboard development efforts.) 9 The project turned to face the public in 2003, when Baltimore launched a website of city operational statistics, which inspired DCStat (2005), Maryland’s StateStat (2007), and NYCStat (2008). 10 Since then, myriad other states and metro areas — driven by a “new managerialist” approach to urban governance, committed to “benchmarking” their performance against other regions, and obligated to demonstrate compliance with sustainability agendas — have developed their own dashboards. 11
The Open Michigan Mi Dashboard is typical of these efforts. The state website presents data on education, health and wellness, infrastructure, “talent” (employment, innovation), public safety, energy and environment, financial health, and seniors. You (or “Mi”) can monitor the state’s performance through a side-by-side comparison of “prior” and “current” data, punctuated with a thumbs-up or thumbs-down icon indicating the state’s “progress” on each metric. Another click reveals a graph of annual trends and a citation for the data source, but little detail about how the data are actually derived. How the public is supposed to use this information is an open question.
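The Mi Dashboard’s prior-versus-current comparison is worth making concrete, because it exposes an editorial judgment the interface hides: for every metric, someone has decided which direction counts as “progress.” A hedged sketch, with invented metric names and values:

```python
# Sketch of a Mi Dashboard-style progress indicator. Metrics and values
# are invented; 'higher_is_better' encodes the editorial judgment that
# the thumbs-up/thumbs-down icon bakes into the display.

def progress_icon(prior, current, higher_is_better=True):
    """Return a thumbs-up, thumbs-down, or no-change marker."""
    if current == prior:
        return "→"
    improved = (current > prior) if higher_is_better else (current < prior)
    return "👍" if improved else "👎"

metrics = [
    # (name, prior, current, higher_is_better) — all invented
    ("High school graduation rate (%)", 74.3, 77.1, True),
    ("Infant mortality (per 1,000)",     7.1,  6.8, False),
    ("Unemployment rate (%)",            8.9,  9.4, False),
]

for name, prior, current, hib in metrics:
    print(f"{progress_icon(prior, current, hib)}  {name}: {prior} → {current}")
```

Note what the icon erases: sample sizes, margins of error, and how the underlying data were derived — precisely the detail the dashboard page withholds.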
Some early dashboard projects have already been abandoned, and others have gone on hiatus while they await technical upgrades. The now-dormant LIVE Singapore! project, a collaboration of MIT’s Senseable City Lab and the Singapore-MIT Alliance for Research and Technology (SMART), was intended to be an “open platform” for the collection, combination, and distribution of real-time data, and a “toolbox” that developer communities could use to build their own civic applications. 12 The rise of smartphones and apps has influenced a new wave of projects that seek not just to visualize data but to give us something to do with it, or layer on top of it.
Over the past several years, a group of European cities has been collaborating on the development of urbanAPI, which proposes to help planners engage citizens in making decisions about urban development. Boston’s Citizens Connect has more modest aspirations: it allows residents to report potholes, damaged signs, and graffiti. Many projects have scaled back their “built-in” civic engagement aspirations even further. Citizens’ agency is limited to accessing data, perhaps customizing the dashboard interface and thereby determining which sources are prioritized, and supplying some of that data passively (often unwittingly) via their mobile devices or social media participation. If third parties wish to use the data represented on these platforms in order to develop their own applications, they’re free to do so — but the platforms themselves involve few, if any, active participation features.
In 2012, London launched an “alpha” prototype of the City Dashboard that powers the mayor’s wall of iPads. 13 Created by the Bartlett Centre for Advanced Spatial Analysis at University College London, and funded by the government through the National e-Infrastructure for Social Simulation, the web-based platform features live information on weather, air quality, train status, and surface transit congestion, as well as local news. 14 Data provided by city agencies are supplemented by CASA’s own sensors (and, presumably, by London’s vast network of CCTV cameras). In aggregate, these sources are meant to convey the “pulse” of London. Other urban cadences are incorporated via social media trends, including tweets from city media outlets and universities, along with a “happiness index” based on an “affect analysis” of London’s social media users. 15 The CASA platform has also been deployed in other UK cities, from Glasgow to Brighton.
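A “happiness index” built on “affect analysis” deserves scrutiny, since its mechanics are rarely shown. The sketch below is a deliberately naive lexicon-based scorer — the word lists and scoring rule are invented for illustration and are not CASA’s actual method — but it makes visible how fragile such an index can be:

```python
# Naive lexicon-based affect analysis, for illustration only.
# The word lists and scoring scheme are invented, not CASA's method.

POSITIVE = {"happy", "great", "love", "sunny", "win"}
NEGATIVE = {"sad", "delayed", "angry", "rain", "lose"}

def affect_score(post):
    """Score one post: positive word count minus negative word count."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def happiness_index(posts):
    """Average affect score across a stream of posts."""
    if not posts:
        return 0.0
    return sum(affect_score(p) for p in posts) / len(posts)

posts = [
    "great sunny morning love this city",
    "train delayed again angry",
    "happy to win tickets",
]
print(happiness_index(posts))  # → 1.0
```

Sarcasm, negation (“not happy”), and the skewed demographics of who tweets all pass through such a pipeline unexamined — yet the resulting number is presented as the city’s “pulse.”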
By now these dashboard launches are so common that we begin to see patterns. Dublin’s dashboard, released just last fall by the Programmable City project and the All-Island Research Observatory at Maynooth University, integrates data from numerous sources — Dublin City Council, the regional data-sharing initiative Dublinked, the Central Statistics Office, Eurostat, and various government departments — and presents it via real-time and historical data visualizations and interactive maps. The platform is intended to help its audiences — citizens, public employees, and businesses — with their own decision-making and “evidence-informed analysis,” and to encourage the independent development of visualizations and applications. 16
Urban dashboard projects embody a variety of competing ideologies.
Such projects embody a variety of competing ideologies. They open up data to public consumption and use. They render a city’s infrastructures visible and make tangible, or in some way comprehensible, various hard-to-grasp aspects of urban quality-of-life, including environmental metrics and, in the case of the happiness index, perhaps even mental health. Yet at the same time these platforms often cultivate a top-down, technocratic vision that, as Paolo Ciuccarelli and colleagues argue, “can be problematic, especially if matters such as the active engagement of all the stakeholders involved in designing, operating, and controlling these dashboards are not properly addressed.” 17 What’s more, these urban dashboards perpetuate the fetishization of data as a “monetizable” resource and a positivist epistemological unit — and they run the risk of framing the city as a mere aggregate of variables that can be measured and “optimized” to produce an efficient or normative system. 18
A History of Cockpits and Control
The dashboard as “frame” — of human agency, of epistemologies and ideologies, of the entities or systems it operationalizes through its various indicators — has a history that extends back much farther than ’80s-era stock brokerage desks and ’90s crime maps. Likewise, the dashboard’s relation to the city and the region — to space in general — predates this century’s interactive maps and apps. The term dashboard, first used in 1846, originally referred to the board or leather apron on the front of a vehicle that kept horse hooves and wheels from splashing mud into the interior. Only in 1990, according to the Oxford English Dictionary, did the term come to denote a “screen giving a graphical summary of various types of information, typically used to give an overview of (part of) a business organization.” The acknowledged partiality of the dashboard’s rendering might make us wonder what is bracketed out. Why, all the mud of course! All the dirty (un-“cleaned”) data, the variables that have nothing to do with key performance (however it’s defined), the parts that don’t lend themselves to quantification and visualization. All the insight that doesn’t accommodate tidy operationalization and air-tight widgetization: that’s what the dashboard screens out.
All the insight that doesn’t accommodate tidy operationalization and air-tight widgetization: that’s what the dashboard screens out.
Among the very pragmatic reasons that particular forces, resources, and variables have historically thwarted widgetization is that we simply lacked the means to regulate their use and measure them. The history of the dashboard, then, is simultaneously a history of precision measurement, statistics, instrument manufacturing, and engineering — electrical, mechanical, and particularly control engineering. 19 Consider the dashboard of the Model T Ford. In 1908, the standard package consisted solely of an ammeter, an instrument that measured electrical current, although you could pay extra for a speedometer. You cranked the engine to start it (by 1919 you could pay more to add an electric starter), and once the engine was running, you turned the ignition switch from “battery” to “magneto.” There was no fuel gauge until 1909; before then, you dipped a stick in the fuel tank to test your levels. Water gushing from the radiator, an indicator you hoped not to see, was your “engine temperature warning system.” As new means of measurement emerged, new gauges and displays appeared.
And then things began to evolve in the opposite direction: as more and more mechanical operations were automated, the dashboard evolved to relay their functioning symbolically, rather than indexically. By the mid-50s, the oil gauge on most models was replaced by a warning, or “idiot,” light. The driver needed only a binary signal: either (1) things are running smoothly; or (2) something’s wrong; panic! 20 The “Maintenance Required” light came to indicate a whole host of black-boxed measurements. The dashboard thus progressively simplified the information relayed to the driver, as much of the hard intellectual and physical labor of driving was now done by the car itself.
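The logic of the “idiot light” — many black-boxed measurements collapsed into one binary signal — can be sketched in a few lines. The sensor names and thresholds below are invented for illustration:

```python
# Sketch of a "Maintenance Required" light: many measurements are
# black-boxed behind one binary signal. Sensor names and acceptable
# ranges are invented for illustration.

THRESHOLDS = {  # sensor name -> (min_ok, max_ok)
    "oil_pressure_psi": (20, 80),
    "coolant_temp_c":   (70, 110),
    "battery_volts":    (12.0, 14.8),
}

def maintenance_required(readings):
    """Light up if any reading falls outside its acceptable band."""
    return any(
        not (lo <= readings[name] <= hi)
        for name, (lo, hi) in THRESHOLDS.items()
    )

healthy = {"oil_pressure_psi": 45, "coolant_temp_c": 90, "battery_volts": 13.6}
low_oil = {"oil_pressure_psi": 12, "coolant_temp_c": 90, "battery_volts": 13.6}
print(maintenance_required(healthy))  # → False
print(maintenance_required(low_oil))  # → True
```

The driver sees only the boolean; which sensor tripped, and by how much, stays behind the dashboard.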
Dashboard design in today’s automobiles is driven primarily by aesthetics. It’s currently fashionable to give the driver lots of information — most of which has little impact on her driving behavior — so she feels in control of this powerful machine. Most “key performance indicators” have little to do with the driver’s relationship to the car. Just as important are her relationship to (1) the gas tank, (2) her Bluetooth-linked iPhone, and (3) the state trooper’s radar gun. 21 While some “high-performance” automobiles are designed to make drivers feel like they’re piloting a fighter jet, the dashboard drama is primarily for show. It serves both to market the car and to cultivate the identity and agency of the driver: this assemblage of displays requires a new literacy in the language and aesthetics of the interface, which constitutes its own form of symbolic, if not mechanical, mastery.
In an actual fighter jet, of course, all those gauges play a more essential operational role. As Frederick Teichmann wrote, in his 1942 Airplane Design Manual, “All control systems terminate in the cockpit; all operational and navigational instruments are located here; all decisions regarding the flight of the airplane, with … very few exceptions … are determined here.” 22 Up through the late ’20s or early ’30s, however, pilots had few instruments to consult. World War I pilots, according to Branden Hookway, were “expected to rely almost solely on unmediated visual data and ‘natural instinct’ for navigation, landing, and target sighting”; navigation depended on a mixture of “dead reckoning (estimating one’s position using log entries, compass, map, etc., in absence of observation) and pilotage (following known landmarks directly observed from the air).” 23 And while some instruments — altimeter, airspeed indicator, hand-bearing compass drift sight, course and direction calculator, and oil pressure and fuel gauges — had become available by the war’s end, they were often inaccurate and illegible, and most pilots continued to fly by instinct and direct sight.
Throughout the 1920s, research funded by the military and by instrument manufacturers like Sperry sought to make “instrument flying” more viable. By 1928, Teichmann writes, pilots were flying faster, more complicated planes and could no longer “trust their own senses at high altitudes or in fogs or in cross-country flights or in blind flying”:
They must rely, for safety’s sake, almost entirely on radio communication, radio beacons, range compass findings, gyroscopic compasses, automatic pilots, turn and bank indicators, and at least twenty-five or more other dials and gadgets essential to the safe operation of the airplane in all kinds of weather. 24
In short, they came to depend on the dashboard for their survival. The instrumentation of piloting represented a new step in automation, according to Jones and Watson, authors of Digital Signal Processing. For the first time, automated processes began “replacing sensory and cognitive processes as well as manipulative processes.” 25 Dashboards manifested the “perceptual edge” of machines over their human operators.
Still, the dashboard and its user had to evolve in response to one another. The increasing complexity of the flight dashboard necessitated advanced training for pilots — particularly through new flight simulators — and new research on cockpit design. 26 Hookway argues that recognizing the cockpit-as-interface led to the systematized design of flight instrumentation that would streamline the flow of information. Meanwhile, recognizing the cockpit-as-environment meant that designers had to attend to the “physiological and psychological needs of pilot and aircrew,” which were shaped by the cramped quarters, noise, cold temperatures, and reduced atmospheric pressure of the plane. 27 Military applications also frequently required communication and coordination among pilots, co-pilots, navigators, bomb operators, and other crew members, each of whom relied on his own set of instruments. 28
The Control Room as Immersive Dashboard
Before long, the cockpit grew too large for the plane:
Phone lines linked controllers to the various airfields, which communicated with individual planes by high-frequency radio. A special red hotline went directly to Fighter Command headquarters at Bentley Priory. Plotters hovered around the situation map. … A vast electric tableau, glowing in a bewildering array of colored lights and numbers, spanned the wall opposite the viewing cabin like a movie curtain. On this totalizator, or tote board, controllers could see at a glance the pertinent operational details — latest weather, heights of the balloon barrage layer guarding key cities, and most especially, fighter status.
That was the Control Room of No. 11 Group of the RAF Fighter Command, at Uxbridge, England, in September 1940, as described by Robert Buderi in his book on the history of radar. 29 The increasing instrumentation of flight and other military operations, and the adoption of these instrumental control strategies by government and business, led to the creation of immersive environments of mosaic displays, switchboards, and dashboards — from Churchill’s War Rooms to the Space Age’s mythologized mission control.
The push-button changed the way we started our cars, summoned our servants, dialed our phones, manufactured our Space Sprockets, and waged our wars.
In the early 1970s, under Salvador Allende, Chile attempted to implement Project Cybersyn, a cybernetics-informed decision-support system for managing the nation’s economy. The hexagonal “Opsroom” was its intellectual and managerial hub, where leaders could access data, make decisions, and transmit advice to companies and financial institutions via telex. 30 Four of the room’s six walls offered space for “dashboards.” 31 One featured four “datafeed” screens housed in fiberglass cabinets. Using a button console on their chair armrests, administrators could control which datafeed was displayed — graphs of production capacities, economic charts, photos of factories, and so forth. It was a proud moment for the humble push-button — that primary means of offering binary input into our dashboards — which, in the course of a century, changed the way we started our cars, summoned our servants, dialed our phones, manufactured our Space Sprockets, and (demonstrating its profound ethical implications) waged our wars. Media historian Till Heilmann, who is investigating the push-button as an integral element in the history of digital technology, argues that pushing buttons — a practice that he traces back to operation of the electric telegraph (but which might go back farther, to the design of musical instruments) — is among the most important “cultural techniques” of the industrial and post-industrial ages. 32
Another of the Opsroom’s walls featured two screens with algedonic alerts: red lights that blinked with increasing frequency to reflect the escalating urgency of problems in the system. On yet another wall, Cybersyn architect Stafford Beer installed a display for his Variable System Model, which helped “participants remember the cybernetic principles that supposedly guided their decision-making processes.” 33 The final “data” wall featured a large metal surface, covered with fabric, on which users could rearrange magnetic icons that represented components of the economy. The magnets offered an explicit means of analog visualization and play, yet even the seemingly interactive “datafeed” screens were more analog than they appeared. Although the screens resembled flat-panel LCDs, they were actually illuminated from the rear by slide projectors behind the walls. The slides themselves were handmade and photographed. The room’s futuristic Gestalt — conveyed by those streamlined dashboards, with their implication of low-barrier-to-entry, push-button agency — was a fantasy. “Maintaining this [high-tech] illusion,” Eden Medina observes, “required a tremendous amount of human labor” behind the screens. 34
Screen interfaces embody in their architectures particular ways of thinking and particular power structures, which we must critically analyze.
Cybersyn’s lessons have filtered down through the years to inform the design of more recent control rooms. In a 2001 edited volume on control room design, various authors advocated for the simultaneous consideration of human-computer interaction and human cognition and ergonomics. They addressed the importance of discerning when it’s appropriate to display “raw” data sets and when to employ various forms of data visualization. They advocated for dashboarded environments designed to minimize human error, maximize users’ “situation awareness” and vigilance, facilitate teamwork, and cultivate “trust” between humans and machines. 35
We might read a particular ideology in the design of Baltimore’s CitiStat room, which forces department managers to stand before the data that are both literally and methodologically behind their operations. The stage direction reassures us that it is those officials’ job to tame the streams of data — to contextualize this information so that it can be marshaled as evidence of “progress.” The screen interfaces themselves — those “control rooms in a box,” we might say — embody in their architectures particular ways of thinking and particular power structures, which we must critically analyze if we’re using these structures as proxies for our urban operations. 36
Critical Mud: Structuring and Sanitizing the Dashboard
Now that dashboards — and the epistemologies and politics they emblematize — have proliferated so widely, across such diverse fields, we need to consider how they frame our vision, what “mud” they bracket out, and how the widgetized screen-image of our cities and regions reflects or refracts the often-dirty reality. In an earlier article for Places, I outlined a rubric for critically analyzing urban interfaces. Here, I’ll summarize some key points and highlight issues that are particularly pertinent to urban dashboards:
First, the dashboard is an epistemological and methodological pastiche. It represents the many ways a governing entity can define what variables are important (and, by extension, what’s not important) and the various methods of “operationalizing” those variables and gathering data. Of course, whatever is not readily operationalizable or measurable is simply bracketed out. A city’s chosen “key performance indicators,” as Rob Kitchin and colleagues observe, “become normalized as a de facto civic epistemology through which a public administration is measured and performance is communicated.” 37
The dashboard also embodies the many ways of rendering that data representable, contextualizable, and intelligible to a target audience that likely has only a limited understanding of how the data are derived. 38 Hookway notes that “the history of the interface” — or, in our case, the dashboard — is also a “history of intelligences … it delimits the boundary condition across which intelligences are brought into a common expression so as to be tested, demonstrated, reconciled, and distributed.” 39 On our urban dashboards we might see a satellite weather map next to a heat map of road traffic, next to a ticker of city expenditures, next to a word-cloud “mood index” drawing on residents’ Twitter and Facebook updates. This juxtaposition represents a tremendous variety of lenses on the city, each with its own operational logic, aesthetic, and politics. Viewers can scan across data streams, zoom out to get the big picture, zoom in to capture detail; and this flexibility, as Kitchin and colleagues write, improves “a user’s ‘span of control’ over a large repository of voluminous, varied and quickly transitioning data … without the need for specialist analytics skills.” 40 However, while the dashboard’s streamlined displays and push-button inputs may lower barriers to entry for users, the dashboard frame — designed, we must recall, to keep out the mud — also does little to educate those users about where the data come from, or about the politics of information visualization and knowledge production. 41
In turn, those representational logics and politics structure the agency and subjectivity of the dashboard’s users. These tools do not merely define the roles of the user — e.g. passive or active data-provider, data monitor, data hacker, app builder, user-of-data-in-citizen-led-urban-redevelopment — they also construct her as an urban subject and define, in part, how she conceives of, relates to, and inhabits her city. Thus, the system also embodies a kind of ontology: it defines what the city is and isn’t, by choosing how to represent its parts. If a city is understood as the sum of its component widgets — weather plus crime statistics plus energy usage plus employment data — residents have an impoverished sense of how they can act as urban subjects. Citizens may be encouraged to use a city’s open data, to build layers on top of the dashboard, to develop their own applications; but even these applications, if they’re to be functional, have to adhere to the dashboard’s protocols.
If the city is understood as the sum of its component widgets, residents have an impoverished sense of how they can act as urban subjects.
For the dashboard’s governing users, the system shapes decision-making and promotes data-driven approaches to leadership. As we noted earlier, dashboards are intended not merely to allow officials to monitor performance and ensure “accountability,” but also to make predictions and projections — and then to change the system in order to render the city more sustainable or profitable or efficient. As Kitchin and colleagues propose, dashboards allow for macro, longitudinal views of a city’s operations and offer an “evidence base far superior to anecdote.” 42
The risk here is that the dashboard’s seeming comprehensiveness and seamlessness suggest that we can “govern by BlackBerry” — or “fly by instrument” — alone. Such instrumental approaches (given most officials’ disinclination to reflect on their own methods) can foster the fetishization and reification of data, and open the door to analytical error and logical fallacy. 43 As Adam Greenfield explains:
Correlation isn’t causation, but that’s a nicety that may be lost on a mayor or a municipal administration that wants to be seen as vigorously proactive. If fires disproportionately seem to break out in neighborhoods where lots of poor people live, hey, why not simply clear the poor people out and take credit for doing something about fire? After all, the city dashboard you’ve just invested tens of millions of dollars in made it very clear that neighborhoods that had the one invariably had the other. But maybe there was some underlying, unaddressed factor that generated both fires and the concentration of poverty. (If this example strikes you as a tendentious fabulation, or a case of reductio ad absurdum, trust me: the literature of operations research is replete with highly consequential decisions made on grounds just this shoddy.) 44
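Greenfield’s confounder can be made concrete with a toy simulation: a hidden factor (call it neighborhood disinvestment — the variable names and coefficients below are invented) drives both poverty and fires, so the two correlate strongly on the dashboard even though neither causes the other.

```python
# Toy simulation of a confounder producing correlation without causation.
# All variable names and coefficients are invented for illustration.
import random

random.seed(42)  # deterministic for reproducibility

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hidden confounder: disinvestment level per neighborhood, 0..1
disinvestment = [random.random() for _ in range(500)]

# Both observed variables depend on the confounder plus small noise;
# neither depends on the other.
poverty = [d * 0.8 + random.gauss(0, 0.05) for d in disinvestment]
fires   = [d * 0.6 + random.gauss(0, 0.05) for d in disinvestment]

r = pearson(poverty, fires)
print(f"correlation(poverty, fires) = {r:.2f}")  # strongly positive
```

The dashboard would display the strong correlation; the confounder, never measured, never appears on any widget — which is Greenfield’s point.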
Cities are messy, complex systems, and we can’t understand them without the methodological and epistemological mud. Given that much of what we perceive on our urban dashboards is sanitized, decontextualized, and necessarily partial, we have to wonder, too, about the political and ethical implications of this framing: what ideals of “openness” and “accountability” and “participation” are represented by the sterilized quasi-transparency of the dashboard?
Getting Back to the Dirt
Contrast the dashboard’s panoptic view of the city with that of another urban dashboard from the late 19th century, when the term was still used primarily to refer to mud shields. The Outlook Tower in Edinburgh, Scotland, began in the 1850s as an observatory with a camera obscura on the top floor. Patrick Geddes, Scottish polymath and town planner, bought the building in 1892 and transformed it into a “place of outlook and … a type-museum which would serve not only as a key to a better understanding of Edinburgh and its region, but as a help towards the formation of clearer ideas of the city’s relation to the world at large.” 45 This “sociological laboratory” — which Anthony Townsend, in Smart Cities, describes as a “Victorian precursor” to Rio’s digital dashboard — embodied Geddes’s commitment to the methods of observation and the civic survey, and his conviction that one must understand a place within its regional and historical contexts. 46 Here, I’ll quote at length from two historical journal articles, not only because they provide an eloquent explication of Geddes’s pedagogical philosophy and urban ideology, but also because their rhetoric provides such stark contrast to the functionalist, Silicon Valley lingo typically used to talk about urban dashboards today.
The tower’s visitors were instructed to begin at the top, in the camera obscura, where they encountered projections of familiar city scenes — “every variety of modern life,” from the slums to the seats of authority — and where they could not “fail to be impressed with the relation of social conditions to topography,” as Charles Zueblin reported in 1899, in The American Journal of Sociology. The camera obscura, he wrote, “combines for the sociologist the advantages of the astronomical observatory and the microscopical laboratory. One sees both near and distant things.” Continuing:
One has a wider field of view than can be enjoyed by the naked eye, and at the same time finds more beautiful landscape thrown on the table by the elimination of some of the discordant rays of light. One sees at once with the scientist’s and the artist’s eye. The great purpose of the camera obscura is to teach right methods of observation, to unite the aesthetic pleasure and artistic appreciation with which observation begins, and which should be habitual before any scientific analysis is entered upon, with the scientific attitude to which every analysis should return. 47
This apparatus offers both a macro view and the opportunity to “zoom in” on the details, which is a feature of interactive digital dashboards, too. But here that change in scale is informed by an aesthetic sensibility, and an awareness of the implications of the scalar shift.
“On the Terrace Roof,” according to a 1906 exhibition review, “one has again an opportunity of surveying the Edinburgh Region, but in the light of day and in the open air” — and, Zueblin notes, “with a deeper appreciation because of the significance given to the panorama by its previous concentration” in the camera obscura:
Here the observer has forced upon him various aspects of the world around him; weather conditions, the configuration of the landscape, the varying aspect of the gardens as the seasons pass, our relation to the sun with its time implications, the consideration of direction of orientation, etc. 48
Descending the floors, visitors encountered exhibitions — charts, plans, maps, models, photos, sketches, etc. — that situated them within their spatial contexts at increasing scale: first the archaeology and historical evolution of Edinburgh; then the topography, history, and social conditions of Scotland; then the empire, with an alcove for the United States; Europe; and, finally, the Earth. (Zueblin admits that this last part of the exhibition, which in 1899 lacked the great globe that Geddes hoped to install, was underdeveloped.) Along the way, visitors came across various scientific instruments and conventions — a telescope, a small meteorological station, a set of surveying instruments, geological diagrams — that demonstrated how one gained insight into space at various scales.
“The ascent of the tower provides one with a cyclopaedia,” Zueblin observes, “the descent, a laboratory. … In the basement we find the results, not only of the processes carried on above, but also classifications of the arts and sciences, from Aristotle or Bacon to Comte and Spencer, and we incidentally have light thrown on the intellectual development of the presiding genius here.” 49 The building thus embodied various modes of understanding; it was a map of intellectual history.
At the same time, the tower gave shape to Geddes’s synthetic pedagogy: one that began with the present day and dug deeper into history, and one that started at home and extended outward into the region, the globe, and perhaps even the galaxy. The Tower impressed upon its visitors a recognition that, in order to “understand adequately his region,” they needed to integrate insights from various fields of specialization: biology, meteorology, astronomy, history, geology — yes, even those who study the mud and rocks thrown into the vehicle. 50
Today’s urban dashboards fail to promote a similarly rich experiential, multidisciplinary pedagogy and epistemology. The Outlook Tower was both a dashboard and its own epistemological demystifier — as well as a catapult to launch its users out into the urban landscape itself. It demonstrated that “to use results intelligently the geographer must have some knowledge of how they are obtained” — where the data come from. 51 The lesson here is that we can’t know our cities merely through a screen. From time to time, we also need to fly by sight, fiddle with exploding radiators, and tramp around in the mud.