Soundscape Sonnenstraße

Agi Hidri, 03716745
Critical Modeling
Ivan Bratoev, Nick Förster, Prof. Dr.-Ing. Frank Petzold
Chair for Architectural Informatics, Department of Architecture, Technical University of Munich

Table of Contents
Soundscape Prototype
Research
Ethnography Workshop
Sensor Prototyping I
Concept Development
Sensor Prototyping II
Implementation Strategy
Prototype
Reflection and Outlook
Mentions
Contact
The Soundscape comprises a linear cooperation between a sensor, a data analysis tool, a modeling tool and an interface tool. The sensor is represented by an Arduino Uno sensor prototype, which is responsible for collecting the raw data that fuels the Soundscape project. The raw data is passed on to Excel, which takes up the role of the data analysis tool. Exported as a .csv file, the data is then imported into Grasshopper (Rhinoceros), where it is transformed and begins taking a visual shape. Finally, the visual data is imported into Unity and output as the final interface for the user.
This project unfolds in the form of a prototype for a mainly participatory tool (which could also be adapted for designing) that focuses on the aural aspect of a given environment. In this booklet, the area this model approaches is the Sonnenstraße in Munich. The Soundscape prototype works symbiotically with the digital twin model of Munich, and consequently seeks to add another layer, related to the sense of hearing, to this virtual representation and digital counterpart of the city.
Soundscape Prototype
Prior to working on the main project, the students had to research and brainstorm, creating a mind-map of different aspects of digital tools and representations in the urban context.
Research
Together with my partner, Maria Karaivanova, I conducted research on the topic of 'Simulation and Analysis'. We elaborated on topics such as simulation types, parametric models and site analysis. Finally, we depicted our results on a Miro board together with the other students.
Ethnography Workshop

The project began with an on-site immersion in the Sonnenstraße. Using simple ready-made sensors, the students were given the task of gathering data on a chosen topic in the study area of Sonnenstraße. I decided to focus on the sound aspect of the environment, with special mindfulness paid to noise pollution.

To gather the data I used a smartphone sound meter application, which measured the decibel levels of the surrounding ambience. This application was my sensor. The way I decided to proceed was to walk down one sidewalk of the street, starting from Sendlinger Tor and ending at Karlsplatz, and then go back down the sidewalk on the other side of the street from Karlsplatz, so that the path enclosed a full circle of the area. The data gathering was conducted at chosen stations: I would walk down the street, stop to gather the data, save it, and then move on to the following station. Stations were defined by road intersections with the main street of Sonnenstraße. After the data was gathered successfully, I noted it together with the other students on a collective cardboard mind-map of the Sonnenstraße.
After a short presentation of all the students' updated maps and research, a discussion followed in the form of a quick game about the stages of the project. In the game the students teamed up into small groups; each group had to think of keywords belonging to five topics, each divided into four categories (topic, sensor, implementation and challenges), and write them on sticky notes. The students then had to mix the keywords, combine them randomly within the same category, deal the keywords out again so that each topic had one note belonging to each category, and finally narrate a connection between the keywords in the form of a coherent story.
After the collective mind-map on Sonnenstraße, the students had to brainstorm further on their chosen research topics. For my sound analysis topic, I broadened my brainstorming and researched further into aspects, factors and possibilities closely or remotely connected to it. I then summarized my results on a Miro board.
Sensor Prototyping I
The next phase of the project consisted of developing a sensor prototype with which we would collect the raw data. The sensor would be powered through an Arduino Uno and connected to different kinds of modules as needed to acquire data. Since my main focus is sound, I figured I would need to work with a sound module. In my first sensor prototype I experimented with connecting just the sound module and an LED to the Arduino Uno. I provided the prototype with a sound source and programmed the LED to react to the sound captured by the sound module. To get this setup working I had to adjust the built-in potentiometer of the sound module.
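A minimal sketch of how this first setup could be scripted, assuming the sound module's analog output on pin A0 and the LED on pin 13 (both pin choices and the threshold value are assumptions, not the original script):

// React to loud sound with an LED, as in the first prototype described above.
const int soundPin  = A0;   // analog output of the sound module (pin assumed)
const int ledPin    = 13;   // indicator LED (pin assumed)
const int threshold = 400;  // placeholder; the usable range depends on the
                            // module's built-in potentiometer adjustment

void setup() {
  pinMode(ledPin, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  int level = analogRead(soundPin);  // raw analog value, 0-1023
  digitalWrite(ledPin, level > threshold ? HIGH : LOW);
  Serial.println(level);             // watch raw values while tuning the potentiometer
}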
Concept Development

Subsequently, the students were given the task of presenting their concept for the project in a more tangible way: in the form of a storyboard. The goal was to come up with two storyboard suggestions, which would offer an overview of how the project would be expected to develop in the following weeks. Out of the two storyboards, one would be selected and implemented.

(Storyboard categories: Data Gathering, Spatial Pattern, Practices of Governance/Participation/Planning, Visualization/Interaction)
In each storyboard, four stages of the project needed to be thought through and given a sensible solution. For the first aspect, data gathering, it was crucial to consider how the raw data would be gathered and what kind of sensor would be needed. The second conceptual stage, spatial pattern, would explain how the sensor would collect the data, where and how it would be implemented in the concrete area of study, as well as additional questions such as how often it would be expected to collect data. The next link in this brainstorming chain, practices of governance/participation/planning, would suggest the field of implementation for the tool being developed. The final stage would tackle interaction and visualization strategies for the project.
In this concept development stage it became clearer that I would be working on creating a Soundscape for the Sonnenstraße. My two suggestions for the visualization and implementation step were to present either a sound model or a visual model; it was decided on the latter. It is relevant to note that as I continued developing my prototype mainly in the realm of a visual model, I also incorporated some sound model elements as an upgrade to a purely visual model. This way my research and tool would offer a wider perspective to the user.
Sensor Prototyping II
The storyboard concept brainstorming helped me better understand what sensor I would need for the project. At this point I decided to build a second sensor prototype for gathering the data.

For the second, upgraded and more complex sensor prototype I concluded that I would need more Arduino modules. The way I would collect data on-site with my sensor would be to go down the street while carrying the sensor (similar to the very first ethnography workshop data gathering), measure a sound decibel level value as well as location, date and time data at a given time frequency, and then save the data on an SD card. The modules I would need, in addition to the sound module, were a GPS module and an SD card adapter module. The GPS module I ended up using was the NEO-6M GPS module for Arduino.
The NEO-6M has four pins that need to communicate with the Arduino: VCC, RX, TX and GND. The VCC and GND pins are connected to the 5V and GND pins on the Arduino respectively. The RX and TX pins of the module are the receiver and transmitter pins used for serial communication with the Arduino Uno microcontroller board. These pins need to be cross-connected to communication pins on the Arduino, so that the TX of the module is linked to the RX of the Arduino and the RX of the module to the TX of the Arduino.
In my case, in line with the Arduino script that I am using, I connected the TX and RX pins of the NEO-6M to pins 3 and 4 of the microcontroller board respectively. It is also important to note that while scripting I noticed that certain libraries used for the GPS module, like TinyGPS or TinyGPS++, would conflict with other libraries I was using. My solution was to use the NeoSWSerial and NMEAGPS libraries for the GPS module instead.
Another noteworthy remark regarding the Arduino scripting, which supports the smooth running of the GPS module alongside the rest of the sensor components, is to avoid using delay(). This is because delay() causes a complete pause in the Arduino's execution, which disrupts the GPS module. When the GPS module starts running for the first time after the Arduino is turned on, it needs some initial time to get ready for collecting the latitude, longitude, date and time data. Each time the module is interrupted by a delay, it has to retake this preparation time. Instead of delay() I found it better to use millis(), which corresponds to the Arduino's internal clock. By assigning certain tasks to run whenever a defined time interval has elapsed, the Arduino gets short breaks from all the computing without shutting its processes down entirely. In my case I set up my code so that the Arduino collects data from the sound and GPS modules every 10 seconds, i.e. every 10000 milliseconds.
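The following is a sketch of this non-blocking timing pattern, assuming the NeoSWSerial and NMEAGPS (NeoGPS) libraries named above; the sound pin and the overall structure are assumptions rather than the original script:

#include <NeoSWSerial.h>
#include <NMEAGPS.h>

NeoSWSerial gpsPort(3, 4);   // Arduino receives on pin 3 (module TX), transmits on pin 4 (module RX)
NMEAGPS gps;                 // NMEA sentence parser
gps_fix fix;                 // most recent parsed fix

const unsigned long INTERVAL = 10000;  // collect data every 10 seconds
unsigned long lastLog = 0;

void setup() {
  Serial.begin(9600);
  gpsPort.begin(9600);
}

void loop() {
  // Feed the parser continuously; no delay() calls anywhere, so the
  // GPS module is never cut off mid-sentence and never has to re-acquire.
  while (gps.available(gpsPort)) {
    fix = gps.read();
  }

  // Whenever the 10000 ms interval has elapsed, take one reading.
  if (millis() - lastLog >= INTERVAL) {
    lastLog = millis();
    int sound = analogRead(A0);          // sound module output (pin assumed)
    if (fix.valid.location) {
      Serial.print(fix.latitude(), 6);
      Serial.print(',');
      Serial.print(fix.longitude(), 6);
      Serial.print(',');
    }
    Serial.println(sound);
  }
}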
The next module I needed was the SD card adapter module. To make data recording convenient I added a white button to the sensor: when the button is pressed, data recording is initiated. To indicate that data saving is running, I included a white LED that lights up while the SD card adapter module is recording. If the user wishes to stop recording, they press the white button again, which also turns the white LED off. I arranged the Arduino script so that every 10 seconds, when data is received from the other two modules, the information is printed as a single new line containing the decibel value, latitude, longitude, date and time, each separated by a comma. Finally, the data is saved as a .txt file on the SD card, which is convenient when importing the data into Excel.
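A sketch of how this button-toggled SD logging could look, assuming the standard Arduino SD library and hypothetical pin assignments (chip select on 10, button on 2, white LED on 5); debouncing is omitted for brevity:

#include <SPI.h>
#include <SD.h>

const int chipSelect = 10;  // SD card adapter CS pin (assumed)
const int buttonPin  = 2;   // white button (pin assumed)
const int whiteLed   = 5;   // white recording-indicator LED (pin assumed)
bool recording  = false;
bool lastButton = HIGH;

void setup() {
  pinMode(buttonPin, INPUT_PULLUP);
  pinMode(whiteLed, OUTPUT);
  SD.begin(chipSelect);
}

// One comma-separated line per sample: decibels, latitude, longitude, date, time.
void logLine(float db, float lat, float lng, const char* date, const char* time) {
  File f = SD.open("sound.txt", FILE_WRITE);  // .txt for easy import into Excel
  if (f) {
    f.print(db);     f.print(',');
    f.print(lat, 6); f.print(',');
    f.print(lng, 6); f.print(',');
    f.print(date);   f.print(',');
    f.println(time);
    f.close();
  }
}

void loop() {
  bool button = digitalRead(buttonPin);
  if (lastButton == HIGH && button == LOW) {  // falling edge = button press
    recording = !recording;
    digitalWrite(whiteLed, recording ? HIGH : LOW);  // LED mirrors recording state
  }
  lastButton = button;
  // ...every 10 s (via millis()), call logLine(...) with the current
  // sensor values, but only while 'recording' is true...
}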
I also made some upgrades to the sound module carried over from the first sensor prototype. Firstly, I converted the analogue values originating from the sound module into decibel values. For this I placed the sound module and my smartphone, running a decibel meter application, at the same distance from a sound source (my stereo). Then I played a constant sound at different audio levels and wrote both outputs down in a simple Excel table. From these values I created a graph and derived a formula that converts the analogue values coming from the module into decibel values. I added this formula to the Arduino script so that the decibel values are calculated automatically. The second upgrade from the first sensor prototype was an added row of four LEDs indicating how loud the sound is; blue indicates the quietest and red the loudest. The yellow LED is programmed to switch on if the sound level is above 85 decibels, which is harmful to the human ear. This way, when the user notices the yellow or even the red LED turning on, it means the current area has problematic sound levels which could, over time, damage human hearing.
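A sketch of the conversion and the LED row, with placeholder calibration constants and thresholds (only the 85 dB yellow threshold comes from the text; the pins, the other thresholds, the fourth LED color and the fit constants are assumptions):

const int soundPin = A0;                                         // sound module (pin assumed)
const int bluePin = 6, greenPin = 7, yellowPin = 8, redPin = 9;  // LED row (pins assumed)

float toDecibels(int analogValue) {
  // Placeholder linear fit dB = a * analog + b; the real constants came
  // from the Excel regression described above and would be substituted here.
  const float a = 0.15, b = 30.0;
  return a * analogValue + b;
}

void showLevel(float db) {
  digitalWrite(bluePin,   db > 40.0 ? HIGH : LOW);  // quietest band (threshold assumed)
  digitalWrite(greenPin,  db > 60.0 ? HIGH : LOW);  // medium band (color and threshold assumed)
  digitalWrite(yellowPin, db > 85.0 ? HIGH : LOW);  // 85 dB: the harmful level named in the text
  digitalWrite(redPin,    db > 95.0 ? HIGH : LOW);  // loudest band (threshold assumed)
}

void setup() {
  pinMode(bluePin, OUTPUT);   pinMode(greenPin, OUTPUT);
  pinMode(yellowPin, OUTPUT); pinMode(redPin, OUTPUT);
}

void loop() {
  showLevel(toDecibels(analogRead(soundPin)));
}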
The resistors I used for my Arduino sensor were 1 kΩ.
After completing the sensor prototype I also attempted a simple Excel-to-Google-Maps data visualization with hypothetical values, to make sure that the type of data I would be collecting from my sensor would be suitable for the next phases of the project.
Finally, I went to Sonnenstraße and started to collect data.
Map link: https://www.google.com/maps/d/edit?mid=1RWc4WY4q7ni4TtN-W7j5lH1eaH9JwQFr&usp=sharing
After completing my first on-site test of the sensor, I noticed a minor typo in the Arduino script: for receiving the position, I had accidentally given the latitude command to the GPS module twice, instead of latitude and longitude. I subsequently fixed this issue so that next time I would acquire correct location coordinates.
Implementation Strategy

The next step in developing the project was to sketch an implementation strategy for the visualization and interaction phase. The visualization would follow in Grasshopper (Rhinoceros) and the interaction in Unity.

Prototype

Now I was ready to begin the final phase of the project. First, I looked into the boundaries of the area of the city that was provided to us in the form of the digital twin model of Munich. Afterwards I went on-site and began collecting data using my sensor. The data was saved as a .txt file, imported into Excel, exported as a .csv file, and then imported into Grasshopper.
With the gathered data I successfully completed my first Soundscape visualization in Grasshopper. For the visualization I used a color-coded surface, where green indicates low sound levels, yellow medium and red high, as well as isolines. The Soundscape is built from the collected points, where the x and y coordinates (position) are given by the latitude and longitude, and the z coordinate (height) by the decibel value at that location.
While trying to link Grasshopper to Unity, I sometimes ran into the problem that the geometry would not display. This issue was solved by flipping the face normals of the objects.
One important thing to mention is that I noticed the digital twin model buildings' mesh was too heavy and complex, which would unnecessarily overload both Grasshopper and Unity. That is why I decided to clean and simplify the mesh. First I baked the mesh into Rhino and joined it into one mesh using Rhino tools. Then I checked whether the face normals were facing the right way by assigning different colors to inward- and outward-facing faces in the Rhino display options. Following that, using the 'Direction' command in the 'Analyze' tools, I flipped the faces so that the majority of them faced the right way. Then I imported the mesh into MeshLab, where I ran it through some 'Cleaning and Repairing' filters, the most important being 'Repair non Manifold Edges by removing faces', 'Remove Duplicate Faces' and 'Remove Unreferenced Vertices'. With this the number of faces and vertices was reduced immensely. I also tried filters from the 'Remeshing, Simplification and Reconstruction' menu, but they did not produce good results, as the mesh geometry was open and there were vertices facing the wrong direction, which proved too challenging to correct. In the end I decided to run the mesh only through the 'Cleaning and Repairing' filters, export it as an .obj, import it into Rhino and finally reference it as a mesh in Grasshopper. I also tried to internalize the data into the Grasshopper Mesh parameter input, but found that it is lighter and quicker if the geometry is simply referenced.

Subsequently, I also completed my first Unity interaction test, where the Grasshopper definition is linked directly to Unity, offering a real-time experience of the Soundscape. This linking is crucial if the user wants to input another collection of sensor data, to which the Soundscape would automatically adjust.
For the final visualization I decided to change the colors to vibrant turquoise and red for low and high decibel levels respectively, and a dark muted purple for the average values. I wanted the extreme values to be easily readable and the middle-intensity values to blend modestly into the background. This gently vanishing appearance gives the Soundscape body and emphasizes the extrema.
For ease of understanding I color-coded the Grasshopper definition and grouped and labeled the components. Components marked with circle groups indicate user input or interaction. The components communicating with Unity are scripted C# components. For the Grasshopper-Unity linking to work, Rhino 7 is needed.
The following levels of the game are not linked with Grasshopper in real time. In the second level the user is presented with the same Soundscape, only this time represented through teal-colored isolines. The city model is given a dark muted material so as not to compete for attention with the columns, which are the highlight and the new information provided on this level. The height of the twisted columns indicates the decibel level of the location they stand in. If the user clicks the columns, they lead into different variations of the third level of the game. The third level comprises elements of a sound model.
In the Unity game I included a toggle for the user to turn the Soundscape on or off, as well as to switch between modes like 'Day Mode' or 'Night Mode', which visualize data collected at different times of day. This slider could easily be modified to show more modes, or to rename the modes into something more meaningful for the user. For example, instead of showing different times of day, the prototype could depict different months or seasons, depending on the implementation and the time the data was collected.
In the third level the user is able to listen to the separated frequencies which make up the sound. This way, depending on different implementation scenarios (for example, if the sound levels are too high and some intervention is needed, or if a specific sound analysis needs to be conducted in a certain area), the user is offered a broader perspective and a more detailed overview of what constitutes the collected sound in a chosen location. Sometimes it is hard to understand what the collected decibel values really mean; this combination visualizes sound in a different way and thus creates another kind of Soundscape. To make my research more tangible I chose four locations in the area of Sonnenstraße, where the road intersection at Sendlinger Tor and the park alongside Herzog-Wilhelm-Straße contrast with each other the most.
I separated the frequencies through sound editing in Adobe Audition. As raw data, I went to the chosen locations and recorded 30-35 second long audio files with my phone. I then imported each of these audio files into Adobe Audition and, under the 'View' menu, selected the 'Waveform Editor'. This allowed me to observe the audio files in a more visual way. In this overview it was possible to notice sound patterns and intuitively understand whether certain frequencies were coming from birds, car horns, sirens, people talking, etc. Afterwards, much like in photo editing software, I cleaned up and separated the frequencies belonging to different sources.
Maybe in the future this manual process could be facilitated through an AI that recognizes these frequency patterns and is able to separate them automatically.

(Audio clip figures: entire park audio; park audio, birds only; park audio, people talking only)
Reflection and Outlook

One major challenge that I faced, and that could be improved on, is the sensor prototype. Although I was able to collect data, the modules feeding my sensor (especially the sound module) were not always sensitive or precise enough, often yielding inconsistent data.

One reason for that could be the temperature dependence of the sound sensor's internal resistors. Whenever I calibrated the decibel conversion for the sound module in a closed warm room and then went outside to gather data, the values would jump and read much higher than in actuality, because it was colder outside. One way to tackle this complication could be to add a temperature module. By monitoring the temperature of the surrounding environment and adjusting the measured decibel data to the current temperature, it may be possible to correct the information collected through the sound module. This calibration could be achieved by adding a formula to the Arduino script that adjusts the decibel values; to calculate an appropriate formula, sound and temperature data would need to be collected in different temperature environments (see the sketch after the next paragraph).

Another reason for faulty data was the wiring of the sensor. The entire linking was done through jump wires on a breadboard, meaning the wires were prone to being easily disconnected, resulting in defective contacts. The solution to this issue would be soldering the wires to a circuit board.
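As a rough sketch of the temperature-compensation idea described above (not part of the built prototype), assuming a TMP36-style analog temperature sensor and a hypothetical linear correction whose coefficient would have to be fitted from paired measurements:

const int soundPin = A0;  // sound module (pin assumed)
const int tempPin  = A1;  // hypothetical analog temperature sensor (e.g. a TMP36)

float readTemperatureC() {
  // TMP36-style conversion: 10 mV per degree C with a 500 mV offset.
  float voltage = analogRead(tempPin) * 5.0 / 1023.0;
  return (voltage - 0.5) * 100.0;
}

float compensateDecibels(float db, float tempC) {
  const float referenceTemp = 20.0;  // room temperature at calibration time (assumed)
  const float coeff = 0.2;           // placeholder dB-per-degree correction, to be fitted
  // Colder surroundings made the module read too high, so readings taken
  // below the calibration temperature are corrected downwards.
  return db - coeff * (referenceTemp - tempC);
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  float db = 0.15 * analogRead(soundPin) + 30.0;  // placeholder analog-to-dB fit
  Serial.println(compensateDecibels(db, readTemperatureC()));
  delay(1000);  // fine for this standalone test; the full sensor uses millis() instead
}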
Other aspects where optimization could be introduced are the processes that require manual work, such as separating the sound frequencies, which could be handled by an AI. Also, instead of the user having to walk down the streets carrying the sensor, this procedure could be automated by building several data-collecting sensor stations, which could be set to record or stay idle from a control room.

In the end I really enjoyed the process of developing this prototype, and given the chance in the future, I would gladly upgrade and optimize the Soundscape project.
Mentions
The workflow and code connecting Grasshopper to Unity in real time were based largely on the work of Junichiro Horikawa, a Japanese architectural programmer and designer who specializes in algorithmic/parametric design in the architectural context and in developing applications for VR/AR/games/web systems/interactive media in the IT field. Junichiro Horikawa's main interest is crossing these two fields for better design possibilities and system developments, which is a great inspiration to me.
Contact
Agi Hidri
03716745
7th Semester, Architecture B.A.