What is the Array of Things initiative?

The Array of Things (AoT) is an urban-scale sensing network that collects real-time environmental data in cities. The initiative is led by Charlie Catlett and researchers at the Urban Center for Computation and Data, a joint initiative of Argonne National Laboratory and the University of Chicago. Launched in 2016 and currently deployed in Chicago, the network produces data that are open and free to the public.

What is this application for?

Our project evaluates the reliability of the data collected by the AoT network. We provide a method to numerically score the network's daily data reliability for each data parameter. With this score, planners can quickly identify the segments of the large AoT dataset that are relevant to their analysis, and the transparent evaluation factors behind each score support a more informed use of data in an increasingly data-driven planning process. The application is also intended to improve research efficiency: given the increasingly large stores of sensor data available, planners can scope the spatial and temporal extent of their research according to where and when reliable segments of the data are available, instead of exploring many different datasets before settling on a suitable scope of analysis.
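For example, a planner could use the daily scores to pre-filter the dataset before any analysis. The following is a minimal sketch, assuming the daily scores have been exported to a CSV file; the file name and the column names (date, parameter, overall_score) are placeholders, not part of the AoT release or of this application's actual output.

```python
import pandas as pd

# Hypothetical export of the daily reliability scores produced by this application.
# The file name and the column names are assumptions for illustration.
scores = pd.read_csv("daily_reliability_scores.csv", parse_dates=["date"])

# Keep only the December 2018 days on which temperature data scored above a
# chosen threshold, then use those dates to scope the actual AoT analysis.
reliable_days = scores.loc[
    (scores["parameter"] == "Temperature") & (scores["overall_score"] >= 0.8),
    "date",
]
print(f"{len(reliable_days)} days are reliable enough for temperature analysis")
```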

How do you use this application?

This website should be used in the following steps:
(1) Select a parameter
(2) Select a date
(*Because of the data collection period, please select a date between December 1st, 2018 and December 31st, 2018; we suggest starting from December 15th, 2018)
(3) Select raw values or imputed values
(4) Click the Apply button

Array of Things
Available data parameters: Temperature, Humidity, Pressure, PPM, CO, H2S, NO2, O3, SO2

Scores of all nodes in one day
The five component scores evaluating the three reliability criteria are averaged into an Overall Reliability score for the AoT network for each day and each data parameter. The higher the score, the better the behavior of the network's nodes.
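As a rough illustration of that averaging, assuming the five component scores (A to E) for one day and one parameter are already computed on a 0-1 scale (the equal weighting follows the description above, but the normalisation to 0-1 is an assumption):

```python
def overall_reliability(a: float, b: float, c: float, d: float, e: float) -> float:
    """Average the five component scores (A-E) into one daily Overall
    Reliability score; assumes each component is already on a 0-1 scale."""
    return (a + b + c + d + e) / 5

# Example: strong sensor value reliability but patchy spatial coverage.
print(overall_reliability(0.92, 0.88, 0.61, 0.55, 0.73))  # 0.738
```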


Sensor Value Reliability
All data collected should be within the sensing range, as determined by the sensor specifications.

Network Sensor Value Reliability (A)
For each node and each 10-minute time interval, the proportion of readings from the selected parameter's sensors that are reliable (i.e. within the sensing range) is computed; Score A is the average of these proportions.
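A minimal sketch of how such a per-interval proportion could be computed for one node is shown below, assuming the readings sit in a pandas DataFrame with a "timestamp" and a "value" column and that the valid range comes from the sensor specification; the column names and range bounds are placeholders.

```python
import pandas as pd

def sensor_value_reliability(readings: pd.DataFrame,
                             valid_min: float, valid_max: float) -> float:
    """Score A sketch: for each 10-minute interval, the fraction of readings
    that fall within the sensor's specified range, averaged over the day.

    Assumes `readings` has a datetime column "timestamp" and a numeric
    column "value"; both names and the range bounds are illustrative.
    """
    in_range = readings["value"].between(valid_min, valid_max)
    per_interval = in_range.groupby(readings["timestamp"].dt.floor("10min")).mean()
    return per_interval.mean()
```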

Overall Consistency in Sensor Value Reliability (B)
Distribution of node standard deviation scores. A high consistency score should be interpreted in the context of Score A: when Score A is low, high consistency indicates that sensor value reliability is consistently poor.
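The exact mapping from node standard deviations to a consistency score is not spelled out here, so the sketch below should be read as one plausible choice rather than the application's formula: a node whose per-interval reliability barely varies scores close to 1, whether that reliability is high or low.

```python
import pandas as pd

def consistency_score(per_interval_reliability: pd.Series) -> float:
    """Score B sketch: 1 minus the standard deviation of a node's per-interval
    reliability (each value in 0-1). The `1 - std` mapping is an assumption."""
    return 1.0 - per_interval_reliability.std()
```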


Spatial Reliability
Reliable data should be collected for the whole spatial extent of Chicago.

Average Proportion of Network Active (C)
Distribution of the average proportion of the network active at each time interval. At any given 10-minute time interval, an average proportion of the nodes in the network is collecting reliable data.
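A minimal sketch of Score C, under the assumption that node status has been flattened into one row per node per 10-minute interval, with a boolean flag marking whether that node's data were reliable (the column names are placeholders):

```python
import pandas as pd

def network_active_proportion(node_status: pd.DataFrame) -> float:
    """Score C sketch: for each 10-minute interval, the fraction of nodes
    flagged as collecting reliable data, averaged over the day's intervals.

    Assumes columns "timestamp", "node_id" and boolean "reliable",
    one row per node per interval; these names are illustrative.
    """
    per_interval = (
        node_status
        .assign(interval=node_status["timestamp"].dt.floor("10min"))
        .groupby("interval")["reliable"]
        .mean()
    )
    return per_interval.mean()
```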

Average Proportion of Chicago Area Covered (D)
At any given 10-minute time interval, reliable data is collected for an average proportion of the Chicago area.
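Coverage of this kind is usually estimated by buffering the locations of the nodes that collected reliable data and intersecting the result with the city boundary. The sketch below uses GeoPandas with a 1 km buffer purely as an illustration; neither the library choice nor the radius is documented by the application.

```python
import geopandas as gpd

def chicago_area_covered(active_nodes: gpd.GeoDataFrame,
                         chicago_boundary: gpd.GeoDataFrame,
                         buffer_m: float = 1000.0) -> float:
    """Score D sketch: fraction of Chicago's area that lies within `buffer_m`
    metres of a node collecting reliable data in a given interval.

    Assumes both layers share a projected CRS in metres
    (e.g. EPSG:26916 for the Chicago area); the radius is illustrative.
    """
    covered = active_nodes.geometry.buffer(buffer_m).unary_union
    city = chicago_boundary.geometry.unary_union
    return covered.intersection(city).area / city.area
```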


Temporal Reliability
Reliable data should be collected at consistent time intervals throughout the day.

Average Proportion of Day-Duration Node is Active (E)
Distribution of the average proportions of the day-duration for which each node is 'active'. For any given node during the day, reliable data is collected for an average proportion of the time.
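A minimal sketch of Score E, assuming a DataFrame of reliable observations with "timestamp" and "node_id" columns (names are placeholders): each node is credited with the fraction of the day's 144 ten-minute intervals in which it produced at least one reliable reading, and those fractions are averaged over nodes.

```python
import pandas as pd

def day_duration_active(reliable_readings: pd.DataFrame) -> float:
    """Score E sketch: per-node fraction of the 144 ten-minute intervals in a
    day with at least one reliable reading, averaged across nodes.

    Assumes `reliable_readings` already contains only reliable observations,
    with columns "timestamp" and "node_id"; these names are illustrative.
    """
    intervals = reliable_readings["timestamp"].dt.floor("10min")
    active_fraction = (
        reliable_readings.assign(interval=intervals)
        .groupby("node_id")["interval"]
        .nunique()
        / 144  # ten-minute intervals per day
    )
    return active_fraction.mean()
```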