In a world with responsive, predictive artificial intelligence operating behind the veneer of everyday human life, a philosophical question arises: are people a feature of that world, or a bug?
From the OpenAI project to research being done at MIT, Google, and Facebook, the race is on to set the table for the technology of the world one hundred years from now.
As with all great advances in human development (and the development of artificial intelligence capabilities would rival going to the Moon), the applications of artificial intelligence will at first be bent toward satisfying our basest desires and appetites, and only then move up the hierarchy of needs.
But a lot of this research and development is being done by scientists, developers, entrepreneurs, and others (technologists all) who—at least in their public pronouncements—seem to view people and our emotions, thoughts, feelings and tendencies toward irrationality and conflict, as a hindrance rather than as a partner.
Or, to put it in "computer speak": in the brave new world of artificial intelligence research, humanity's contributions, and our decision making, are too often viewed as a bug rather than as a feature.
However, design thinking demands that humans, with their messy, irrational problems and conflicts, be placed at the center of such thinking rather than relegated to the boundaries and the edges, even as humans create machines that can learn deeply, perform complex mathematics, build logical algorithms, and generate better solutions to complex future problems than the humans who created those problems and conflicts in the first place.
Eventually, humans will create intelligence that will mimic our responses so closely that it will be hard to tell whether those responses are “live” or merely “Memorex.”
But until that day comes, mediators, arbitrators, litigators, social workers, therapists, psychologists, anthropologists, philosophers, poets, and writers need to get into the research rooms and the think tanks, onto the boards of the foundations, and onto the stages at the conferences, alongside the technologists, to remind them that there is more to the future than mere mathematics.
Or else, the consequences for future conflicts (human vs. human, and even machine vs. human) could be staggering.
-Peace Be With You All-
Jesan Sorrells, MA
Principal Conflict Engagement Consultant
Human Services Consulting and Training (HSCT)
Email HSCT: jsorrells@hsconsultingandtraining.com
Facebook: https://www.facebook.com/HSConsultingandTraining
Twitter: https://www.twitter.com/Sorrells79
LinkedIn: https://www.linkedin.com/in/jesansorrells/