- What are the ethics of data mining, genetic screening and hydrofracking?
- What is the significance and future of neuroethics?
- Can there be ethical guidelines for the production and use of chimeras?
- Is there a right to technological connectivity?
- Given revelations of internet data surveillance, what concerns should be raised about the possibility of brain-monitoring devices?
- The first concern is marketing and the idea of “opting out” rather than a mandatory “opt-in.”
As customers, you and I have gained more control over blocking attempts to sell to us, so marketers and advertisers have had to come up with ever more clever (and blunt) ways to capture our valuable time and attention, with confusing and frustrating results for all parties involved.
Now imagine if marketers had access to the most intimate space on the planet: your private brain space. There would be no “option to opt out,” even though all the legalese would say that there was.
Which gets us to point number 2…
- The second concern is that, increasingly, the desire not to participate in social communication is seen as a sign of social ineptitude at best and as dangerous at worst.
In other words, the nature of an aberrant act itself is no longer enough to create outrage; the lack of social participation is now the primary driver of outraged responses. This leads to concern number 3…
- The third concern is that we have long sought—as individuals, societies, and cultures—to control people under the guise of freeing them from Plato’s Cave.
Brain monitoring devices won’t be used to give us freedom, collaboration, and connection. Instead, they will be used to take away freedom, encourage and inflame false fracturing and individualization, and destroy connections between people.

When the astronaut Dave powers down the rebellious HAL 9000 computer in 2001: A Space Odyssey, and, more recently, in the 2013 film Her, starring Joaquin Phoenix, pop culture shows us what machine “death” looks, and feels, like.
The concept of murder derives from the concept of life, and from the ideas and philosophies we hold, as individual humans and as collective societies, about what traits constitute life.
In the case of a machine, I take the position that a machine cannot overcome the limitations of its creator.
Life is defined not only by self-sustaining processes (we were asked, while writing this post, whether it would be murder to power down a machine created by another machine) but also by wisdom attained through life experience.
The crux of wisdom lies at the intersection of common sense, insight, and understanding.
HAL 9000 may have had one, or even two, of those things, such as insight and understanding, but “he” (see how we anthropomorphized an inanimate object there?) utterly lacked the third: common sense.
Just like Skynet in The Terminator or the machine networks of The Matrix, HAL 9000 was unable to negotiate in good faith with his creator.
“He” made an “all or nothing” decision about Dave’s presence, Dave’s mission, and Dave’s motives, and then took extreme action, just as the machines did in The Matrix and The Terminator.
The ability to negotiate with others in good faith, and to honor those agreements, is a human trait based on knowledge, experience, common sense, and insight, not just a happy byproduct of a conscious mind.
And until machines have the ability to negotiate not only with their environments, in the crudest sense of the term, but also with their creators, we should feel free to power them up, or down, at will.
After all, our Creator does the same thing.
Right?