“Plato imagined philosopher-kings guarding his utopia. Here in Aspen, a modern day utopia, we have Bill Gates…” (The Atlantic, 2010)
My most recent post on technology discussed what it might mean for a Christian working in cyber security to commit to helping those who are weak and oppressed. In it, I attempted to draw a picture of the weak and oppressed in this field: they are those who are unable either to communicate without being spied upon, or to protect their information and data.
I also argued that working in the field of security does not guarantee that you are helping the weak and oppressed, just as it’s true that working as a doctor does not guarantee that you are practicing a healing form of medicine. In the case of security, the injustice in the field stems from the fact that security technologies can be used either to help people find their voice, or to suppress it with greater efficiency than ever before. Towards the end of the post, I briefly mentioned that this notion of helping the weak and oppressed can be hard to put into practice, because money or success – or both – is often on the line.
This idea – that in the business of security, a righteous decision to help those who are oppressed is often in direct conflict with profit or worldly success – is at its heart an issue of business ethics, and so I want to expand a bit on business ethics in the technology sphere in this post.
Developing an ethic around technology is difficult, partly because it’s so hard to understand or foresee the implications of a technology once it’s out of your hands. Most technologists would argue that this simply isn’t their problem: leave the ethical questions to the philosophers.
So, rather than struggle with these issues, a task which, to be fair, requires as much philosophy as it does technical knowledge, the approach for virtually the entire post-Industrial period has been “shoot first, ask the ethical questions later.” But that’s starting to change: those in the technology sphere are finally realizing that the pursuit and promotion of technology simply for its own sake can indeed be a great good, but also a great evil, and that an ethic of sorts to govern technology is needed.
“Don’t Be Evil”
An interesting take on business ethics, and in particular on business ethics as it pertains to technology, comes from Google’s well-known slogan: “don’t be evil.” Despite the flak that Google and the rest of the tech industry have taken over invasions of privacy during the past few years, it’s not a terrible slogan. Much to its credit, it directly addresses a central element of ethics in technology: technical innovators are often faced with a dilemma about the dual uses of their technology, and quite regularly have to make what amounts to a moral decision (I’m thinking here of the obvious examples: nuclear technology, some of the interesting consequences of 3-D printing, and even the controversies over Google Glass, but surely there are others as well).
“Don’t be evil” seems to suggest that intentionality is the key to these ethical dilemmas: don’t intentionally create something for evil purposes. But there’s a problem with this approach: whose definition of evil are we going with? There’s a sense in which such evil should be blatantly obvious at first blush. But what about business choices which are cowardly, or deceptive, or unwise, but not evil? Do technologists have a responsibility to define business ethics more broadly than the affirmation of a negative: “not evil”? Is there a responsibility to do good?
These are some challenging questions – questions that seem to cut across a lot of the topics discussed on this blog. Last week, Daniel Song wrote a piece titled What Medicine Cannot Give. He points out that technology is fundamentally altering the way we do medicine, sometimes in ways that we haven’t really worked out. He’s asking essentially the same questions: what is the end purpose of technology? Is technological progress always good? What parts of technology are good?
The Jobs Ethic: “Be Simple. Be Perfect”
In contrast to Google’s “don’t be evil” stands Apple, which has implicitly formed an ethic of its own (or rather, has inherited the legacy of an ethic shaped by Steve Jobs’s dominating presence). I highly recommend a 2011 essay by Andy Crouch about Steve Jobs and Apple – he addresses many of these same questions in the context of the rise of the ubiquitous personal electronic device, and offers an extremely insightful look at the underlying ethics propelling many of the creative processes at Apple. Crouch makes the point that the Apple phenomenon has at its center a religious-like worship of technology as the Ultimate Answer (and perhaps of Jobs as the medium by which the Ultimate Answer is revealed). That’s certainly a worldview!
But if Google’s ethic of “don’t be evil” fails because it lacks a central, positively defined anchor, the Jobs ethic fails because it centers on the wrong thing. Technology cannot save this world: its solutions are dazzlingly efficient, but not effective; its successes are overwhelming, but not everlasting; its uses impart power, but not the ability to wield that power for good. An ethic centered on technology misses the key catalyst to real progress in this world: changed human hearts. That is something only Christ can deliver.
A Christian Technology Ethic Centered on Christ
Christian technologists have a mandate that goes far beyond the moralized “don’t be evil.” We’re instructed to “avoid every kind of evil,” but we’re also told to “hold fast what is good” and, even more importantly, to “test everything” (1 Thessalonians 5:21–22) by continually evaluating whether good fruit has been, or will be, produced.
Jacques Ellul makes an interesting point in his writings on technology: technology has a way of getting out of control. Increasingly, and especially since the industrial revolution, it rules us rather than the other way around. The underlying truth of this is deeply ingrained in the human psyche: the concept of technology gone out of control permeates literature. Think of Babel. Think of Frankenstein. Think of HAL. (As a side note, I think hackers implicitly understand this idea as well – hence the thrill of excitement when we do, at last, get technology to bend to our will and do our bidding.)
As Christians, we have to find ways to ensure that technology remains secondary to the person, and that it does not take up the center of our belief system and crowd out Christ. The arc of history has made it clear that this won’t “just happen” on its own: it will take thought and the answering of difficult questions. It will mean admitting when technology is out of control and stepping on the brakes. And troublingly, it might mean turning away from or delaying promising technologies that go against what it means to be human, or what it means to thrive.
I want to be crystal clear that a Christian ethic for technology does not mean the rejection of technology as an inherent evil. It does not mean setting the default view of technology as something to be suspicious of. And it does not mean some sort of acceptance of neo-luddite thought.
Instead, it is a call for discernment. As much as the ancient world yearned for philosopher-kings, in this new millennium we need philosopher-builders who, in their role as technology-creators, are anchored in a Christ-centered worldview and willing to take upon themselves the task of asking and answering these difficult questions.