Signal/Noise CW 31/2017
Signal/Noise is a weekly collection of commented articles and essays that we deem worthy of your time.
Let’s get excited about Maintenance
Apart from the practical problem, in Los Angeles, of creating a tunnel system in a region known for geological instability, Mr. Musk’s idea indulges a fantasy common among Silicon Valley types: that the best path forward is to scrap existing reality and start over from scratch. With urban transport, as with so many other areas of our mature industrial society, a clean slate is rarely a realistic option. We need to figure out better ways of preserving, improving and caring for what we have.
We overvalue innovation and undervalue maintenance, even though the latter provides more value to more people. This is especially evident in the US, where, as the authors point out, it creates enormous problems when it comes to infrastructure.
In Germany, we are at a different impasse. While our infrastructure is far from perfect, it is nowhere close to the state it is in in the US. Unfortunately, we do not have an alternative narrative to digitization and tend to overvalue the innovation brought to us from the US. This makes us lose touch with the things that Germany does well. It also prevents us from developing and adopting a narrative of our own.
Are Digital Technologies Making Politics Impossible?
If technology is for helping us make our lives better, why would we tolerate a system that is fundamentally designed not to do those things?
It’s like if you had a GPS and every time you used it, it took you to the next city or the next country. You would never continue using that GPS.
So why do we tolerate that from systems that are navigating us not through physical space but through informational space? I think we should hold them to the same standard.
– James Williams, former Google employee, doctoral candidate researching design ethics at Oxford University
RSA Events is generally a highly recommended podcast, but this episode with James Williams is great even by their standards. There are many avenues into a substantial critique of the work that Silicon Valley corporations are doing, but very few are as effective as one coming from someone who used to work at one of those corporations and approaches it from the perspective of design ethics.
He also triggered a line of thinking that I have been playing out in my head for a while: what if there is no such thing as design thinking or user-driven design in Silicon Valley? Those companies aren’t actually good at designing what their users want; they are very good at designing what they can make us use.
Data-driven vs. user-driven is the key here.
In an attention economy, the user doesn’t need more stuff. We are already beyond our capacity to deal with the never-ending number of tools at our disposal.
A.I. versus M.D.
“That’s the bizarre thing about neural networks,” Thrun said. “You cannot tell what they are picking up. They are like black boxes whose inner workings are mysterious.”
The “black box” problem is endemic in deep learning. The system isn’t guided by an explicit store of medical knowledge and a list of diagnostic rules; it has effectively taught itself to differentiate moles from melanomas by making vast numbers of internal adjustments – something analogous to strengthening and weakening synaptic connections in the brain. Exactly how did it determine that a lesion was a melanoma? We can’t know, and it can’t tell us. All the internal adjustments and processing that allow the network to learn happen away from our scrutiny. As is true of our own brains.
“A deep-learning system doesn’t have any explanatory power,” as Hinton put it flatly. A black box cannot investigate cause. Indeed, he said, “the more powerful the deep-learning system becomes, the more opaque it can become. As more features are extracted, the diagnosis becomes increasingly accurate. Why these features were extracted out of millions of other features, however, remains an unanswerable question.” The algorithm can solve a case. It cannot build a case.
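To make the quoted point a bit more tangible, here is a minimal, purely illustrative sketch in Python. Everything in it is invented for the example – the synthetic data, the feature count, the hidden rule, and the use of scikit-learn’s generic MLPClassifier rather than the dermatology system from the article. The point it demonstrates: the classifier produces a confident answer, every learned parameter is available for inspection, and yet none of them amounts to an explanation.

```python
# Minimal sketch of the "black box" problem, using synthetic data.
# Nothing here comes from the article; the features and labels are
# invented to illustrate the point, not to model real dermatology.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Pretend each lesion is described by 20 numeric features
# (a real system would learn its features from raw pixels).
X = rng.normal(size=(1000, 20))
# Invent a hidden rule the network has to rediscover on its own.
y = (X[:, 3] * X[:, 7] + 0.5 * X[:, 12] > 0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(X, y)

lesion = rng.normal(size=(1, 20))
print("diagnosis:", clf.predict(lesion)[0])  # a confident answer...

# ...and the entirety of the "explanation" behind it: thousands of
# floating-point weights, none of which maps to a human-readable rule.
n_params = sum(w.size for w in clf.coefs_) + sum(b.size for b in clf.intercepts_)
print("learned parameters:", n_params)
```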
We talk a lot about the transformative power of machine learning, deep learning, and AI, but astoundingly, most of that talk stays on the level of implications: what it means for work, or how we will organize ourselves as societies once computing reaches the point where machines can take over a significant part of what we do today.
And yet, I think we do not talk enough about the simply staggering fact that we, as humanity, can already create machines that perform certain tasks better than we ever could – without understanding how they do it. I find myself unapologetically in awe every time I read about this.
As good as this New Yorker article is – and it’s really, really good – I’m actually surprised that it isn’t about just this one aspect alone. We create machines that are black boxes to us.
Why do we do this?
Thrun blithely envisages a world in which we’re constantly under diagnostic surveillance. Our cell phones would analyze shifting speech patterns to diagnose Alzheimer’s. A steering wheel would pick up incipient Parkinson’s through small hesitations and tremors. A bathtub would perform sequential scans as you bathe, via harmless ultrasound or magnetic resonance, to determine whether there’s a new mass in an ovary that requires investigation. Big Data would watch, record, and evaluate you: we would shuttle from the grasp of one algorithm to the next. To enter Thrun’s world of bathtubs and steering wheels is to enter a hall of diagnostic mirrors, each urging more tests.
This is a classic example of a very smart white guy in Silicon Valley envisioning scenarios that never seem quite bad – as long as you don’t start asking questions about how that scenario of the future might actually work.
I have no objection to being augmented by technology that helps me become and stay healthier. That is not the issue; it is absolutely a future I would want to see happen. But I would also want a clear understanding of how this technology works, who has built it, and how they intend to profit from it beyond the services they provide to me. Or how both my own and foreign governments can access this kind of data.
Our friend Molly Steenson recently gave a talk on AI that provides a great perspective on the various narratives currently floating around. Recommended.