“In a world of increasingly complex computational capabilities, algorithms have become capable of simulating the human-centric design process of architecture and urban fabrics.” These are the introductory words of research meant to uncover the future possibilities of artificial intelligence in the design process. Yet to what extent can this rapidly advancing technology really take over an architect’s contributions?
Will designers’ efforts one day become futile as technology catches up with their socio-cultural, political, and aesthetic sensibilities?
Many of the passages, ideas, and perspectives shared in this article are based on an extensive debate conducted with Lebanese architect Bernard Mallat of the firm Bernard Mallat Architects, as part of an ongoing research project on algorithms in urban planning. I would like to extend special thanks to Dr. Rachid Chamoun of the Lebanese American University, and Ms. Leyla el-Sayed Hussein, for their support in this research.
We tend to have this automatic preconception that designers are here to stay, and when faced with the question of artificial intelligence simulating a designer’s process, the thought is often brushed off as science fiction. The sensibility and complexity of the human mind seem limitless in the thoughts they can produce, often surpassing the physical functions that house them. To simplify this statement, we can point to a certain kind of thought process that allows us to understand socio-cultural and political functions when designing cities – the spirit of the city, so to speak. How could a computer ever understand such abstract concepts, which are not based on the quantifiable inputs we feed it?
But that’s the thing. They can. Without diving too deep into the research and technicalities, we can at the very least discard the presumption that algorithms run on a simple causal relationship. We tend to picture algorithms as sequences: an input feeds into a node, which generates an output that feeds into the next node, and so on. They are considered technical, uncreative, and uninventive – as opposed to the intrinsically and culturally sensitive human mind.
Yet when dealing with artificial intelligence, causality is not the whole story. Rather than sequences of cause after effect after cause, these systems run as networks, often stumbling upon hard-to-quantify mathematics best described as micro-decisions. This is the algorithmic counterpart of a design dilemma: there is often not one answer but an infinite number of choices. Theoretically, with the correct influx of “big data” (the sum of our online databases, activities, location services, etc.), artificial intelligence could have the extensive resources needed to make decisions creatively, in a non-quantifiable, non-scientific way – almost as though it were designing and innovating.
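The contrast drawn above – a fixed causal chain versus a network of micro-decisions that opens onto many plausible answers – can be sketched in a few lines of Python. This is a purely illustrative toy (the function names and the arithmetic are invented for this article, not drawn from any real planning system):

```python
import random

def sequential_pipeline(x):
    """A causal chain: each step feeds the next; one input, one output."""
    x = x * 2   # step 1
    x = x + 3   # step 2
    return x    # always the same answer for the same input

def networked_designer(x, n_candidates=5, seed=None):
    """A toy network-style process: the same input branches into many
    'micro-decisions', yielding a space of candidate answers rather than one."""
    rng = random.Random(seed)
    candidates = []
    for _ in range(n_candidates):
        # each micro-decision nudges the result in its own direction
        weight = rng.uniform(0.5, 1.5)
        bias = rng.uniform(-1.0, 1.0)
        candidates.append(x * weight + bias)
    return candidates

print(sequential_pipeline(4))          # -> 11, deterministically
print(networked_designer(4, seed=42))  # many plausible outputs for one input
```

The point of the sketch is only the shape of the two processes: the pipeline collapses to a single answer, while the network hands back a field of options that something – or someone – must still choose among.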
Add to that the ability to simulate virtual brains for every person with a virtual presence, and the theoretical premise is that with the correct tools and server capacities, our own online data could be more than sufficient to create a near-accurate virtual simulation of our activities in a city. Artificial intelligence becomes capable of taking into consideration every person, their views, their motion, and their existence as part of a whole community.
This is the – very simplified – gist of it. A growing body of research and data suggests that artificial intelligence is on its way towards simulating a creative design process without compromising the functional capabilities of algorithms. And this forces us to face a set of very serious questions we have been consciously overlooking for the past few years.
- Should this technology be implemented? And if so, to what scale?
- Where should the line be drawn, between human intervention and an automated practice?
- Is an imperfect and faulty human-generated design better or worse than a much more fine-tuned automated design?
- Keeping in mind that humans have used their limited analytical abilities to abstract design into rules far removed from the actual complexity and seeming randomness of nature, could human designs be considered the stiffer, more robot-like ones, while automated designs – as the sum of naturalistic micro-decisions – are the more human and natural?
- In what ways could human-led and AI-led designs be integrated?
- Given that countless possibilities could be generated by AI to tackle the problems of design, does this mean that the human contribution is still necessary for choosing a “better” option? What should this be based on?
- Often the attributes that we claim separate humans from machines, such as societal understanding and emotion, are very subjective, random, and different between individuals. Moreover, our understanding of them is limited at best. Are there any attributes that could really be considered a unique and necessary input of the human mind?
With digital progress rising exponentially rather than at a constant speed, the theoretical debate on AI’s implementation in the world of design must begin now. And a great place to begin was with practicing professionals such as Mr. Mallat.
Over two hours of coffee-table discussion, there was a general resignation to the inevitability of it all. “Whether it should or shouldn’t, it’s going to happen,” says Mr. Mallat. “There’s no use speculating whether it should – it will. To what degree, however? I don’t know; it could be a variation of what you described.” The gist here was that we have seen this process before, in the transitions from hand drawings to CAD modeling, to BIM, and now to scripts and algorithmically generated designs. It will take over, just as every other technological advancement in the field of design has.
Scripting is now widely used in environmental, structural, and physical contexts, with designers handling the social aspects themselves. It is in technology’s ability to truly grasp the essence of these social dynamics that the debate lies – currently, it is difficult for a computer to implement into a project the cultural feats we experience as humans. Mr. Mallat references a project of his (currently under construction) that takes the long-standing Lebanese vernacular tradition of tying ropes to baskets and using them for vertical transportation as the idea for an interactive façade that provides residents with that function.
“In our typical social behavior, we have the basket to buy groceries. Electricity cuts off and you put a rope with a basket to move goods. This is an implementation and understanding of a simple [yet relatively complex] social aspect. This is very difficult for a computer today to do on its own. This is why practices that currently rely purely on scripting are limited to environmental or structural aspects.”
There is a general agreement that what currently holds our practice together is algorithms’ lack of true generative social awareness. Yet as research indicates this social awareness could one day become inherent to artificially intelligent programs, the clear-cut answers – and our confidence in our position – grow vaguer.
“I’d like to think as an architect I would not become obsolete,” says Mr. Mallat. “But if I do become obsolete then that’s what happens.” This game of catching up becomes a waiting game, and if one day designing becomes an algorithm’s job, then there’s only one practice left for architects – theorizing.
However, while computation has already taken over many mathematical – or, to put it better, objective – fields such as banking and accounting, it has a harder time with subjective domains like architecture and urban planning. “Those jobs would disappear long before ours does, because ours involves the human factor.” Yet with computers promising a future in which they assimilate our subjectivity, we too may become obsolete.
“I would say humans would become somewhat of a judge.” Through these projected programs’ ability to use generative design to produce thousands, even millions, of unique design solutions, architects specifically – and people generally – could be assigned the job of judging, rather than leaving that to the software itself. It’s a democracy of potential solutions. “In the end, they’re gonna live in it. Not the computer,” says Mr. Mallat.
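This “machine proposes, community disposes” workflow can be sketched as a short loop: the algorithm enumerates candidate layouts, a human-derived score ranks them, and the top-rated option wins. A minimal illustration in Python – every parameter, function name, and scoring criterion here is hypothetical, standing in for whatever residents would actually vote on:

```python
import itertools

def generate_layouts():
    """Enumerate candidate layouts from a few design parameters (a toy
    stand-in for generative design, which would produce thousands)."""
    heights = [2, 5, 10]            # storeys
    green_ratios = [0.1, 0.3, 0.5]  # share of open green space
    orientations = ["N", "S", "E", "W"]
    for h, g, o in itertools.product(heights, green_ratios, orientations):
        yield {"storeys": h, "green_ratio": g, "orientation": o}

def human_judgement(layout):
    """Stand-in for aggregated resident preferences: here, a fixed taste
    for mid-rise, greener, south-facing schemes. In practice this score
    would come from people judging, not from the software itself."""
    score = 0.0
    score += -abs(layout["storeys"] - 5)  # prefer mid-rise
    score += 10 * layout["green_ratio"]   # prefer green space
    score += 2 if layout["orientation"] == "S" else 0
    return score

# The machine proposes; the community disposes.
best = max(generate_layouts(), key=human_judgement)
print(best)  # -> {'storeys': 5, 'green_ratio': 0.5, 'orientation': 'S'}
```

The design choice worth noting is that the selection criterion lives entirely outside the generator: the software’s job ends at producing options, and the “democracy of potential solutions” is whatever function the community plugs in as the judge.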
However, perhaps this democracy of choice could keep us from achieving the ultimate urban potential. Then again, even in theory algorithms cannot be purely objective. As Mr. Mallat puts it, “[Algorithms] are based on humans and there are biases built into the systems – the technology you’re proposing will be studying us, and will make decisions that involve our own biases.” If the algorithm’s reference is the sum of our own big data, then perhaps, in the process of gaining cultural sentience, artificial intelligence would build human-like biases too.
As for the question of aesthetic and visual expression, the point of interest becomes our urban cityscapes’ relation to nature. Architecture as seen today still largely imposes itself on nature, yet much of it attempts to blend and integrate with it. “Personally, I don’t like the works of Greg Lynn; it looks very organic, like plants almost,” says Mr. Mallat.
To explain: we are experiencing a transitional period in which the non-natural (not the unnatural) is slowly integrating with nature. And the human reaction to this has been quite interesting. We inherently have no problem staying in a room that contains a box-shaped computer, yet many studies have shown that people find computer-generated humans almost disturbing – the so-called uncanny valley. It makes us uneasy when the boundaries are blurred between what is of nature and what has imposed itself on nature. The problem arises when you can notice both in one entity.
This is where algorithmically generated designs are now. “Maybe if [Greg Lynn’s architecture] reached a spot where it is more and more organic, functioning like nature until it is seamlessly and fully integrated within it, then I’d accept it; it wouldn’t feel off.” Of course, with the introduction of self-healing materials and living façades, we may one day have something indistinguishable from nature, which could break the uneasy barrier we currently experience. Yet this process, with all the unease it provokes, may be necessary.
Perhaps it is best explained through Mr. Mallat’s following input:
“The things that we like are different to an extent, but not too different. We like things that are unique yet true to our present taste – perhaps an exaggerated feature. And gradually, taste adapts. If 20 years ago you had created a phone that resembled a slab, as our phones do today, people would’ve hated it. You cannot design too far into the future. You appreciate variations on things that exist – we need a reference similar to what’s there. AI has to realize that it cannot give us a solution that is far too advanced for us to accept it. It needs to slowly evolve with us. The way I’d like to put it: design is like a joke. The structure of a joke starts with a setup, and the punchline is perfectly logical once you’re supplied with enough information; you cannot begin with a punchline. Yet the best jokes are unexpected. Innovation is when you give a perfectly logical yet unexpected solution. If it’s completely radical and doesn’t make sense – at least not yet – you cannot enjoy it.”
Finally comes the discussion of human interpretation and conflict. With references to pre-modernist, modernist, and post-modernist ideas of truth, we find the value of interpretation. “Derrida applied deconstruction to literature. If you analyze a text, you realize the literature is based on the writer’s background. If you deconstruct the text and rewrite it without his biases, it’s the same text rewritten, only with the new writer’s bias. You can keep deconstructing things out of their biases infinitely [to no true avail].” Post-modernists believed in multiple truths, and we still live in a seemingly post-modernist era. We hold multiple interpretations of the same things, based on our own biases. How does a computer take into account transformation, varied opinions, social norms, and behavior? Context changes and so does society – would these designs be able to change over time? Is it possible for them to adhere to multiple truths? “If [our cities] advance into a natural lifeform, then maybe they can synchronize with people and change.”
This drives forward the concept of adaptive reuse, based on humans adapting architecture over time for their own uses, and abandoning other uses. What humans also do is create friction. “People create friction and friction creates,” says Mallat. “When everybody agrees, nothing changes. Ideas are conflicting – how do you create artificially generated designs that have to adapt to paradoxes? We design because we are unsatisfied, and that drives us to evolve.” The notion is that even if artificial intelligence could one day generate the ultimate utopian city, we would still not be satisfied and would seek more. Being satisfied is beyond human imagination. Our perceptions of good design evolve with time, just as the perception of beauty has changed from 19th-century paintings to our present-day fashion catalogs. “Whatever the computer is going to design for that society, even if it’s adapted to it in that moment of time and neighborhood, that will eventually change and be un-adapted.”
It is therefore imperative to seek adaptability in developing the future of artificially intelligent cities. Until the day comes when physical architecture can imitate virtual architecture in its morphing capabilities and transformative possibilities – becoming an animated environment adaptive to its inhabitants – algorithmically generated urban fabrics will only ever be near-perfect.