Recently I launched Hot Pop Factory, a new collection of parametrically designed, 3D printed jewelry, with fellow designer Bi-Ying Miao. We’ve been toiling away for the last several weeks perfecting the collection in order to get the best possible results out of our MakerBot Replicator. The result was an endless litany of prototypes as we homed in on the final designs.
3D printing turned out to be a massive boon for this process. We were able to iterate the design at full wearable scale essentially in real time. This gave us an intimate understanding of the design process, which allowed us to carefully refine the final pieces. I wrote a bit more in depth about the process on the MakerBot Blog.
In addition to 3D Printing, we also relied on the parametric design tools we use in architecture to develop our unique take on jewelry design. In particular we used Rhino3D and the Grasshopper plugin (seen below) in order to derive the final forms.
Overall the project has already been a tremendous success. We’ve been fortunate to have some early sales traction and the feedback from our customers has been great. Bi-Ying and I are extremely proud of the final collection which you can view below or in our online store. We’ve got a lot of ideas about how to grow this project and broaden its vision, if you are interested in following along be sure to visit the Hot Pop Factory Blog.
I recently taught a workshop on physical computing to architecture students at Ryerson University in Toronto. The course was for 30 people and lasted two days. I designed the course to help students learn how to combine physical sensors and actuators with the digital design tools they were already using in their architectural design work.
The first day covered a basic introduction to electronics and then familiarized students with the Arduino hardware and IDE. We had a number of sensors and basic electronic components like LEDs and potentiometers at our disposal. We combined them in various ways, familiarizing everyone with the essential ideas of input and output, both digital and analog, using the Arduino microcontroller.
The second day focused on how to combine the lessons learned in the first day with other applications like Processing and Rhino/Grasshopper. These lessons were definitely the most interesting, as they had very obvious implications for how they could be applied to students’ architectural design work. Some simple examples from these lessons are visible in the videos below.
This has been the largest course I’ve prepared and delivered so far. The experience was very educational, even for me. I have a whole new appreciation for the amount of preparation and care that goes into designing a course that is digestible for a variety of people with varying aptitudes and skill levels. Striking a balance during the presentation between covering all the content, delivering it in an engaging way, and keeping an appropriate pace for a large group of people is definitely an art form that will take a lot of practice to master. Overall the course was very well received, with lots of positive student feedback. As I surveyed the room throughout the duration of the workshop, everyone was able to keep up and implement all of the exercises. I was thrilled to see many people going beyond the basic material and applying the lessons to their own projects and experiments.
I spent this weekend teaching 3D printing at the first Ladies Learning Code 3D printing workshop. Ladies Learning Code is a fantastic Toronto-based non-profit whose mission is to empower people by teaching them technology skills. I had worked with Ladies Learning Code previously, helping mentor one of their Photoshop workshops, and was impressed by the diversity of ages and backgrounds the event attracted. Needless to say, when I heard they’d be running a similar workshop to teach people about 3D printing, I was thrilled.
The event was structured in two parts: the first taught participants basic 3D modeling skills with Meshmixer; the second allowed them to bring their designs to life by 3D printing them.
Last week, local charity The Stop held their first annual Night Market. The event paired local chefs and designers together to create a novel dining experience evocative of food markets from around the world. I was invited to design and fabricate a Night Market cart for The Bellevue restaurant.
Designing a cart that paired the hip, laid-back style of The Bellevue with computational design tools was a fun and interesting challenge. The product was an intricate, lightweight, structural canopy system that floated above an understated base.
The cart turned out to be a perfect match for The Bellevue, and the event was a huge success for all involved. Next week I’ll be following up with a detailed breakdown of the cart’s design and fabrication.
Recently a client tasked me with designing the ceiling for a renovation at a large university. They wanted to construct the ceiling out of wooden baffles using a layout that appeared random. The big catch was that the budget didn’t allow for the custom fabrication of several thousand square feet of ceiling space. Consequently the challenge was to build the entire ceiling by repeating a single 2′ x 8′ panel that would hold 6 wooden baffles.
Although I was constrained to the use of just one panel I was able to customize the panel arrangement across the ceiling area. In order to avoid large sections of repetitive panel layouts I wrote a small program that laid out each panel in linear rows and then offset each row at random 2 foot increments.
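The row-offset idea can be sketched in a few lines of Python. Note that the ceiling dimensions here are made up for illustration, and the real program also had to handle the actual room geometry:

```python
import random

PANEL_W, PANEL_H = 8, 2          # the single 2' x 8' panel, laid in long rows
CEILING_W, CEILING_H = 64, 32    # hypothetical ceiling dimensions in feet

def layout_panels(ceiling_w, ceiling_h):
    """Lay out panels row by row, shifting each row left by a random
    multiple of 2 feet so identical panels never line up into an
    obvious repeating grid."""
    panels = []
    y = 0
    while y < ceiling_h:
        # Shift the whole row by one of four 2-foot increments.
        offset = random.choice([0, 2, 4, 6]) - PANEL_W
        x = offset
        while x < ceiling_w:
            panels.append((x, y))  # bottom-left corner of each panel
            x += PANEL_W
        y += PANEL_H
    return panels

panels = layout_panels(CEILING_W, CEILING_H)
```

Because the offsets are always multiples of the 2-foot baffle spacing, the baffles themselves stay on a continuous grid even though the panel seams wander.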
The final step was to design the actual panel. Normally this work is constrained to professional CAD and visualization tools. I thought it would be fun to write a simple web based tool that captured all the salient features of the problem. You can play with it below:
The web tool is fun because it allows project stakeholders to engage with the design process without the massive burden of learning a professional toolset.
Finally, to complete the project, I produced physical samples of the finalized design to verify that they actually maintained the visual cohesion that appears on the screen:
After the initial set-up and a lot of tinkering I’m starting to get some awesome results from the 3D printer. I made these two complex figures to better understand the limits of what the machine can do. As you can see below the results are already pretty impressive:
One of the benefits of 3D printing is that you can achieve forms that aren’t always possible with traditional manufacturing techniques, this makes for some novel photo opportunities.
Both objects were printed with ABS plastic. Surprisingly the black ABS makes the manufacturing process a lot more evident. The specular highlights strongly reveal the striation as the plastic is printed layer by layer. This results in a really interesting “velvety” texture across the surface of the object.
I was also very impressed with the resolution of the prints. I was using a layer thickness of about 0.27mm; as you can see in the photo below, this makes for a pattern that is slightly denser than a human fingerprint.
The long wait is finally over… that 3D printer I won’t shut up about? It has finally arrived! Watch as the gripping tale of me opening a box and removing something is retold with a vivid photo montage:
To all my friends who have to endure me endlessly rambling on about this thing, I assure you… the obsession has only just begun!
As part of a submission to Bracket’s Latest Issue, my research partners and I decided to supplement our paper by visualizing traffic patterns in a dense urban environment. I decided this also provided a great opportunity to begin learning Python. Coming from a design background my programming knowledge is quite limited and therefore I was very much looking forward to the challenge of learning a modern language like Python.
Our specific goal was to map how traffic volume changes over a period of 24 hours in dense urban centers. We began the task by seeking a good data source. Surprisingly, hourly traffic data for major cities was fairly difficult to come by. After scouring various open data resources we eventually found the website for
Naturally, the downside in dealing with raw data is that it comes in massive quantities, formatted in ways that are not necessarily easy to work with. The total dataset was about 500MB encoded as CSV files. Based on how they provided the data we were able to manually obtain a subset of about 60000 entries to work with. Clearly a programmatic approach would be needed to parse our subset into a suitable format for visualization.
For several months I had been dabbling in the Processing programming language in order to create visualizations. Although I intended to use Processing once again to produce the final graphics for this project, it wasn’t the ideal candidate to parse the raw data into a usable form. Having wanted to learn Python for some time, I decided it was more appropriate for this part of the project.
My first step was to determine what data I actually needed for the final visualization. The primary element was obviously the actual hourly traffic counts provided in the Department of Transportation dataset, but I also needed to associate those with some spatial data in order to construct a graphical map. Our source data only had pieces of the necessary information which I would have to stitch together and cross reference with other sources using Python.
The raw traffic count data was composed of about 25000 lines that looked like this:
The first number is a unique ID that is assigned to each road in New York State; what follows is the date and time of each entry, and the final number is the actual traffic count for each lane:
At first glance the data seems pretty limited. This is because the ID number represents a particular road which can be determined by cross referencing the ID with a master header file. The header file contains about 36000 entries that look like this:
"710046","NY"," 3",,"CORNELIA ST","244.87","PLATTSBURGH W CITY LN",
The breakdown is as follows:
This process was pretty simple, even for a Python noob like myself. Using Python’s built-in csv library, I looped through the traffic count file, and each time I encountered a unique ID I would loop through the header file to find the corresponding line. I saved the cross-referenced data to a new CSV in the following format:
041003,HUDSON ST,CHRISTOPHER ST,BETHUNE ST
This provided me with a much reduced dataset that included only the headers that were relevant to the area I was mapping. I saved the ID number, road name and the starting and ending intersections. I would use these later to determine the precise location of the traffic data.
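A minimal sketch of that cross-referencing step might look like the following. The file names and column indices are my guesses at the structure shown above, and instead of re-scanning the header file for every lookup I’ve loaded it once into a dictionary keyed by road ID:

```python
import csv

def build_reduced_headers(counts_path, headers_path, out_path):
    """Cross-reference traffic count rows against the master header file,
    writing one reduced row per unique road ID."""
    # Load the header file once, keyed by the unique road ID.
    headers = {}
    with open(headers_path, newline="") as f:
        for row in csv.reader(f):
            headers[row[0]] = row

    seen = set()
    with open(counts_path, newline="") as f, \
         open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        for row in csv.reader(f):
            road_id = row[0]
            if road_id not in seen and road_id in headers:
                seen.add(road_id)
                h = headers[road_id]
                # Guessed indices: road name plus the two endpoint fields.
                writer.writerow([road_id, h[4], h[6], h[7]])
```

The dictionary makes the whole pass roughly linear instead of quadratic, which matters when the counts file has 25,000 lines and the header file 36,000.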
After creating the reduced header file I took a closer look at the actual traffic count data. Originally, I had naively assumed that the data for each street would be recorded at consistent dates and times. In reality it was much more of a hodge-podge; various streets were monitored at differing time periods throughout the year, some had complete records, others were spotty. This presented a problem for us because obviously we wanted to map a comparable data set. Since this was still the best data source we could find, we settled on a compromise. We decided to only take complete records of a continuous 24hr period and where possible to ensure that the data was recorded on a Wednesday. In the cases where Wednesday was not available we would take records from the next closest day. Although this wasn’t the ideal situation for the project it did make for an interesting programming challenge.
I found this task more difficult than the first, but still manageable. I started by looking closely at Python’s datetime library. It took some tinkering and a lot of trial and error, but eventually I was able to parse the dates correctly with strptime() and return an integer for the day of the week. I was then able to use this information to rank each set of entries based on its proximity to Wednesday. Checking whether the set of records was complete for a given date was just a matter of ensuring that it had the correct number of entries. I output this information to a consolidated traffic count file.
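Sketched in Python, the ranking logic might look like this (the date format string is an assumption; the DOT files used their own encoding):

```python
from datetime import datetime

def wednesday_distance(date_str, fmt="%m/%d/%Y"):
    """How many days (in either direction) a date falls from Wednesday."""
    day = datetime.strptime(date_str, fmt).weekday()  # Monday == 0 ... Sunday == 6
    return min((day - 2) % 7, (2 - day) % 7)          # circular distance to Wednesday

def best_complete_date(entries_by_date):
    """Keep only dates with all 24 hourly entries present, then prefer
    the one whose weekday is nearest to Wednesday."""
    complete = [d for d, rows in entries_by_date.items() if len(rows) == 24]
    return min(complete, key=wednesday_distance, default=None)
```

The circular distance matters because a Thursday (one day after) should beat a Monday (two days before), even though a naive subtraction would rank them the other way around for some weekdays.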
At this point, the Department of Transportation data was in manageable form, though crucially I was still missing location data. The only information I had to work with were the road name and start and end intersections which I had previously saved in the consolidated header file. This part definitely provided the most exciting challenge. My goal was to get the latitudes and longitudes for these intersections in order to construct a map. I was aware of the existence of several geolocation services, but I was uncertain as to how robust they would be in terms of parsing my search queries. I also had never written any code that interfaced with external APIs or web services.
The first service I tried was geocoder.us. Working with their service was pretty straightforward: I simply had to craft a URL with my search query and then parse the information that geocoder.us returned. Once again, Python’s standard libraries made this process fairly simple. I was able to craft the URL from my compiled header file with some string concatenation and replacement, then pass it to urllib, which would return the data from the geocoder service. Geocoder.us allows you to specify the return format in a number of different ways; I chose CSV since I was already pretty familiar with it at this point. The downsides to their service were that its interpretation of my intersection names was not very robust and that it only allows one query every 15 seconds. Ultimately I was only able to retrieve latitudes and longitudes for a small part of my dataset using this service.
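A sketch of that exchange, where the exact URL scheme and the layout of the CSV response are assumptions based on my reading of their docs at the time:

```python
import time
import urllib.parse
import urllib.request

def build_geocode_url(road, cross, city="New York", state="NY"):
    """Turn a pair of intersecting streets into a geocoder.us-style query URL."""
    query = f"{road} & {cross}, {city}, {state}"
    return ("http://geocoder.us/service/csv/geocode?address="
            + urllib.parse.quote(query))

def geocode_intersection(road, cross):
    """Fetch the CSV response and pull the coordinates out of the front of it."""
    with urllib.request.urlopen(build_geocode_url(road, cross)) as resp:
        lat, lon, *_ = resp.read().decode().split(",")
    time.sleep(15)  # respect the one-query-per-15-seconds limit
    return float(lat), float(lon)
```

Keeping the URL construction in its own function makes it easy to eyeball the queries that the service fails to interpret.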
Since geocoder.us was not up to the task, I set out in search of a new geocoding service. Eventually I found Yahoo! GeoPlanet, which turned out to be fantastic: its search was much more robust and it didn’t have a rate limit. It did come with a couple of added complications in that I had to obtain an API key and the data was always returned in XML format. Getting the API key was painless, just a fairly standard online signup process, after which I appended the key to my queries. Working with the XML data wasn’t much more difficult than working with the CSVs once I found the appropriate libraries. Ultimately, I was able to obtain good latitude and longitude information for the majority of my dataset using Yahoo’s service.
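The GeoPlanet version followed the same shape. In this sketch the endpoint, the app-ID parameter, and the XML tag names are placeholders for the general pattern rather than the exact API:

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

APP_ID = "YOUR_APP_ID"  # hypothetical; obtained through Yahoo!'s signup process

def parse_latlon(xml_text):
    """Read the coordinates out of an XML response tree."""
    root = ET.fromstring(xml_text)
    return (float(root.findtext(".//latitude")),
            float(root.findtext(".//longitude")))

def geocode(query):
    """Append the app ID to the query URL, fetch the XML, parse it."""
    url = ("http://where.yahooapis.com/v1/places.q(%s)?appid=%s"
           % (urllib.parse.quote(query), APP_ID))
    with urllib.request.urlopen(url) as resp:
        return parse_latlon(resp.read().decode())
```

The `.//` prefix in `findtext` searches all descendants, so the parser keeps working even if the coordinates sit a few levels deep in the response.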
Now that I had all the necessary information to complete the visualization, I created a nice clean CSV file that associated the traffic counts with the appropriate latitudes and longitudes. I was able to easily read this into Processing in order to generate the necessary graphics. Some of the raw output from Processing can be seen below:
Even without placing the visualization over an existing street map, you can make out the rough outline of lower Manhattan fairly easily. We were later able to use this material to create animated and interactive versions of the data, as well as visualize it in 3D.
For me, this project was tremendously rewarding. Most of the work I did was quite basic, but it was very empowering to be able to retrieve and integrate large amounts of data from the web in useful ways; it’s definitely a skill I’m looking forward to implementing in other projects in the future. I won’t embarrass myself by posting the mess of spaghetti code I wrote to get this all done; certainly there were more elegant and concise approaches to the problem. Despite this I was able to achieve all the things I had hoped to do and managed to learn a lot of basic Python features in the process.
I received word today from MakerBot that my order for their new product, the Replicator, is delayed. Although the e-mail I received refers specifically to my order, I believe this likely applies to all Replicator orders, as mine was placed just after the original product announcement. This isn’t entirely unexpected, as MakerBot is a young company with a new and experimental product. Nonetheless, I am still a little disappointed, maybe because I’ll have to hang on to my excitement a little longer… or maybe because they decided to take cash up front. Their e-mail didn’t include a new estimate for an updated lead time; I’m hoping MakerBot Support gets back to me on that soon. In the meantime, their website is now showing an 8-week lead time instead of 6 for new orders. Here is the full text of the original e-mail:
Update 1: According to MakerBot support “worst case scenario” is an additional two week wait.
Update 2: It took a while, but my Replicator has finally arrived.
Hello from MakerBot Industries!
We’re running late with your order for a MakerBot Replicator. We’ve put together a testing program for each MakerBot so that they’ll leave the Botcave in tiptop shape and it turns out that doing things right takes time. In this case, it’s taking a little more time than we expected.
We’ve added a swing shift to deal with the increased demand and we’ve implemented additional Quality Assurance processes to ensure that your Replicator will work as intended when you receive it. Instead of rushing orders that may not be ready out the door, we’re testing these first Replicators with special care.
We thank you for your patience with us during this process. We know it’s tough to wait, but we know you’ll be glad that you did.
Thanks for giving us a little more time with your order.
If you have questions or comments, we’ll be here to help at firstname.lastname@example.org.
CEO MakerBot Industries
Earlier this week, while attending an electronics class at hacklab.to, I had the pleasure of meeting Michael Woodworth, founder of Upverter. Upverter is a local Toronto startup, and Y Combinator alumni company, that provides a platform for social hardware design. The core of their product is an online tool that allows users to collaboratively design schematic circuits for their projects. Instead of struggling to describe the details, I recorded a short demonstration of my own:
Here is the embedded schematic of the project in the video:
Seeing software like this popping up is really cool for several reasons. First of all, the pace of innovation is greatly accelerated by saving all of our work in a common pool. Upverter, like many new web apps, makes everyone’s projects searchable. This means newcomers to the field immediately have a library of thousands of practical examples to learn from. Additionally projects can be accelerated by “forking” other people’s designs. This essentially allows you to use another user’s work as a base from which to create derivative works. Finally, because the entire app is ‘social,’ new techniques can become memes and spread quickly among the product’s users. Disseminating knowledge through this new medium allows the state of the art to advance much more rapidly when compared to traditional ways of spreading knowledge. All of these attributes are over and above the obvious benefits of working in a real-time collaborative editing environment with a live database connected to real parts and manufacturers.
Of course, the downside is that as we migrate more of our applications to the cloud, we begin to lose control of our infrastructure and critical tools. Right now Upverter is a small start-up run by idealistic hackers. You can easily download your work in common formats free of charge. But one must wonder, what happens as their company grows and maximizing profits becomes the dominant motivator? Will they still risk their users migrating to another platform by making your files too easy to retrieve? Will they stop supporting a feature that is crucial to your workflow midway through a project? Will the company go under, leaving you without access to your entire library of work? These are just a few of the risks we face by allowing our software to reside in a centralized system far from our local machines.
After seeing projects like Upverter I’m highly enthusiastic about the future. Just as our workflows were revolutionized when we began migrating our tools from physical space onto computers, they will again be revolutionized when we migrate from local hard drives to the web. There is tremendous opportunity for discovering better ways of working, but also new risks as we begin to lose control of the tools we rely on.
For a while now I’ve been looking forward to starting an Arduino-based project. Having already done the very basics, like blinking LEDs, I thought I’d embark on a simple starter project. I picked up the LoL (Lots of Lights) Shield from a local hackerspace. The kit comes with a PCB (printed circuit board) and a crapload of LEDs. Once fully assembled, you can control each LED individually, allowing you to display images, text, games or whatever else you can think of. As an introduction to electronics, I certainly got more than I bargained for in terms of soldering practice:
View Part 2: Here
I’ve come to think that there exists an interesting dichotomy between growth and robustness in most systems. For the purposes of this post, growth means receiving a positive return on some investment of energy or resources, robustness refers to the ability of the system to sustain itself in a changing environment.
At any given instant, a system has a fixed amount of resources it can allocate to various tasks. Fundamentally, there are two options for how to use these resources: they can either be allocated into familiar areas that are already demonstrating growth, or they can be allocated into unexplored areas in hopes of finding new sources of growth.
A system that places the majority of its resources into known areas that are already yielding strong growth risks getting stuck on local maxima and additionally makes itself systemically vulnerable to contextual change. For instance, a society that invests most of its resources into developing a single energy source, like oil, stands to grow explosively for a while but might collapse after this resource is depleted due to the underdevelopment of other energy sources.
On the other hand a system that distributes its resources widely, exploring many uncharted areas, will be stuck with anemic growth. This is because many times the exploratory outlay of resources will yield little or no return on investment at all. The upswing is that when resources are allocated in a highly distributed way the system is extremely robust when local regions fail or begin to yield lower rates of return because the system has many alternative sources of growth.
I’ve attempted to visualize this relationship below with an interactive demonstration:
Refresh the page if nothing is happening, click and drag to create regions of high return
In the visualization above, I’ve created a search algorithm that distributes its resources (the little black dots) using a power-law distribution. By clicking and dragging you create regions of high return, the locations of which are not known to the algorithm. As the algorithm distributes its resources throughout the search space, inevitably some will randomly fall into the regions of high return (orange dots). The algorithm will begin to dynamically reallocate its resources such that it has a higher probability of creating new dots near ones that have previously yielded a high return. In order to make the environment dynamic, you will see that as more resources are allocated to a region of high return, the size of that region will slowly decrease until it eventually disappears. Crucially, the algorithm always maintains a power-law distribution regulating the breadth vs. depth of its search, allowing it to dynamically reallocate its resources into new high-return areas after explored ones have been depleted.
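For the curious, the core loop can be approximated in a few lines of Python. This sketch substitutes a simple two-mode explore/exploit split for the true power-law distribution used in the visualization, and all the numbers are illustrative:

```python
import random

def high_return(x, y, hotspots, radius=0.1):
    """Is this point inside any of the (hidden) high-return regions?"""
    return any((x - hx) ** 2 + (y - hy) ** 2 < radius ** 2
               for hx, hy in hotspots)

def allocate(steps, hotspots, explore_prob=0.3, jitter=0.05):
    """Place dots in the unit square, biasing new dots toward past winners."""
    winners = []  # dots that previously landed in a high-return region
    placed = []
    for _ in range(steps):
        if winners and random.random() > explore_prob:
            # Exploit: place a new dot near a previous winner.
            hx, hy = random.choice(winners)
            x = hx + random.gauss(0, jitter)
            y = hy + random.gauss(0, jitter)
        else:
            # Explore: place the dot anywhere in the search space.
            x, y = random.random(), random.random()
        placed.append((x, y))
        if high_return(x, y, hotspots):
            winners.append((x, y))
    return placed, winners
```

Tuning `explore_prob` is exactly the growth-vs-robustness trade-off described above: push it toward zero and the system clusters on known maxima; push it toward one and growth becomes anemic but nothing depends on any single region.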
I find that many of our approaches to problem solving are concerned with optimizing for singular outcomes. In the future, as we attempt to harness the power of complex adaptive systems for ourselves, we will become much more concerned about how to distribute resources instead of how to optimize for a single purpose. By carefully understanding the relationship between depth and breadth in our resource allocations, we can begin to create dynamic systems that are able to adapt to change while managing our risk profile much more tightly. Hopefully, this can begin to reduce the probability of “black swan” events (like the mortgage crisis) and other existential threats to our society.
Earlier this week I purchased a consumer 3D printer. It was a big investment and in a later post I’ll take some time to review why I believe it will be worthwhile. In the meantime, I want to do a quick run-down of the products that are emerging in this space while the research is still fresh in my mind.
For those who are not familiar, a lot has happened in the last few years in the world of 3D printing. Previously 3D printing was strictly a commercial endeavor due to the high cost of hardware (upwards of $60,000). In the last five years hackers have started cobbling together do-it-yourself 3D printer kits; extremely primitive versions of their commercial counterparts. These kits effectively work like computer controlled glue guns. Open source software deconstructs 3D digital models into horizontal slices and this information is sent to the printer. The printer uses an extruder to heat up a plastic filament. The extruder is guided by servos in 3 axes, allowing the filament to be deposited layer by layer, gradually reconstructing the object. What is so revolutionary about these kits is their low cost. At around $1500, a tiny fraction of the commercial versions, the technology is now within reach of individuals. Over the past year this tiny cottage industry has begun to mature into actual companies with fledgling products. Below is a run-down of who the players are:
The RepRap is the printer that got this whole revolution under-way. The project began at the University of Bath back in 2005. The ambition was to create a 3D printer that could print itself. This ambition is a long way from being fulfilled but the project is a huge leap in the right direction. The RepRap isn’t a product per se but a set of open source designs. The process of building a RepRap is extremely long and laborious. It can take several weeks to acquire its many parts and many months thereafter to fully assemble and calibrate the printer. This intensive process represents the cost of being on the bleeding edge, the upswing is that a vibrant community has emerged around the RepRap. This community is constantly improving upon the core RepRap designs and several derivative products have emerged including several of the printers below.
Perhaps as important as the printer itself is the fact that MakerBot also owns Thingiverse. Currently Thingiverse is the biggest online community for actually sharing 3D-printable objects. In the long term this might be their biggest asset. As 3D printing becomes mainstream, and real consumers get involved, it will be a tiny minority who fire up their favorite CAD program to draft a new spatula for the kitchen. Instead consumers will gravitate towards these communities where thousands of objects designed by others already exist.
The Ultimaker has a fairly active forum too. Its users regularly share help and advice as well as their projects. If you are located in Europe, I would suggest that this is definitely the printer to look at. I ended up choosing otherwise because, after the currency conversion and shipping cost, the total price was similar to the dual extruder MakerBot, which I preferred.
A generation from now our descendants will be remarking on the curious etymology of the word ‘phone’. As a smart phone user, I can attest that the ‘phone’ part of my device may be one of its more trivial features. In a sense, it’s a shame because the word ‘phone’ comes with a lot of baggage. The ‘long distance’, ‘text messages’ and ‘call display’ that we pay extra for are really just silly fabrications at this point. Even the notion of phone numbers is woefully anachronistic in a world of ubiquitous social media. Why are we using silly strings of digits when we can use meaningful identifiers like people’s names to find one another? I created the above image to capture this dissonance, as it seems the word ‘phone’ will forever be the name for our personal, portable, connected, general purpose computing device and sensor array.
Over the holidays, videos of pets playing games on their owner’s iPads were making the rounds on YouTube. The videos are cute and novel but they also gave me pause…
As I tap out this post on my keyboard, a complicated device with over 100 buttons all marked with strange symbols, it becomes pretty clear that an important shift has occurred. We’ve finally refined interaction to the point where these convoluted devices are no longer necessary. Our interface metaphors are becoming so pure and intuitive that they are no longer metaphors at all. The objects flying across the screens in the above videos, although still representations, look and react just as we’d expect them to in real life. The relationship is clear enough that it does not need to be taught, in fact it is so clear that it does not even require a human to operate.
Watching our pets interact, in relatively sensible ways, with the machines we build represents an important milestone in interaction and usability. What’s even more exciting is that this is only the tip of the iceberg. This interaction is still pretty dumb; it’s still a picture stuck on a plane. As ubiquitous computing becomes – ubiquitous, we can stop thinking about choosing the right metaphors in our interfaces and begin to start thinking about how to eliminate the interface altogether.
Architecture is meaning. Other than a few hundred tons of concrete, what separates the shed in your neighbour’s backyard and the Guggenheim in New York City is a sophisticated cultural narrative. Architecture is building with an intent to communicate a message. A building conveys ideas about itself: how it’s organized, how it should be used. A building also conveys ideas about the society that built it. These messages are carefully crafted by the architect and realized in the discourse of his critics and peers. What separates architecture from the banal is the collective discourse and understanding that surrounds the built form.
This is important because going forward, I believe we will begin to craft buildings in a fundamentally different way. The notion of singular architects, visionaries who decree clear narratives, is being eroded. As the world grows more interconnected, specificity becomes possible where generalizations once reigned. In response, hierarchical organizations are being replaced by decentralized processes. Mass customization will soon become possible in the physical world as it has in the digital one. The consequence is that broad, widely understood messages will be replaced with billions of tiny, contextualized ones. The implication for the built world will be increasingly ad-hoc processes of accretion and change. Their meanings will be more specific, and therefore richer, but increasingly impenetrable to the uninitiated outsider.
Now that information scarcity is over, all of the institutions that we have taken for granted are being revolutionized in this way. The pace of change is unprecedented. Our political structures and laws, formerly pillars of social stability, are no longer able to keep pace. I am interested in investigating the ways that we will be forced to cope with the new world that is emerging around us. As such this blog is intended to be a discourse on meaning. This is a forum for sharing my reflections on how we can comprehend a rapidly changing world that lacks fixed forms and clear narratives.