Bridging the AI divide between sciences and humanities
In spite of these distinctions, this complex yet exclusive circle creates a unique opportunity to investigate how divisions among people and nations can be bridged to the benefit of the harmonious coexistence of humanity.
In 2017, the Association of Pacific Rim Universities (APRU), a consortium of leading research universities located in nations around the Pacific Rim, focused on the diversity, particularly the digital divide, existing in the region.
Choosing new and emerging technologies, especially artificial intelligence (AI), as a principal area of interest, APRU set up a project to be undertaken by a multinational interdisciplinary research team drawn from member institutions, under the name "AI for everyone: benefitting from and building trust in the technology".
The team, comprising members specialising in the natural and social sciences and humanities from universities in Australia, Chile, China, Hong Kong, Japan, Mexico, Russia, Singapore and the United States, was unique, and was kindly sponsored by Google Asia Pacific. Keio University, located in Tokyo, Japan, acted as the coordinating academic institution.
Members of the research team came together to address issues such as the governance, benefits, accessibility, transparency and development of AI.
Although this technology has world-changing capabilities, there is apprehension about its adoption for several reasons:
- Its opaque ‘black box’ nature;
- Its potential to exacerbate economic and social divisions;
- Its implications for privacy; and
- Its risk of criminal misuse.
Over 14 months, the team built up a substantial body of knowledge on managing powerful technologies, with the aim of informing how we educate future generations on their responsible use. A set of working papers and a policy statement were generated for partial public dissemination.
Cultural and political divisions
The digital divide itself is ever evolving and can be interpreted in many ways, from technological and social divisions to cultural, political and academic ones. All play an important role. In fact, the most sensitive issues were found to be divisions along cultural and political lines. This is partly because AI can throw up questions on the very essence of human existence, leading to very strong arguments based on cultural and political factors.
If the ‘mind’ is the core of what it means to be human, how should machines that seemingly have ‘minds’ of their own be recognised? This seemingly simple yet profound question can have considerable implications in our lives if it is not meticulously scrutinised and exhaustively assessed.
And when this question was explored by the team, a diversity of viewpoints between societies with distinct cultures and histories was quickly encountered. It was also found that philosophical questions around ‘mind’, ‘self’ and ‘individuality’ likely inform attitudes towards state control of the data that drives AI.
While the traditional Western libertarian inclination is to protect individual freedom and privacy, more communitarian societies may be less resistant to collective control.
In such a divergent environment, the simplistic imposition of Western views on humanity and the free market not only creates friction but also deepens mutual distrust. Discussions must be based on an empathetic understanding of the philosophical contexts of communities for any agreement to be reached on the priorities in governing this technology.
Technological design requires social expertise
Another serious divide is between technology and society, especially AI and society. We desperately need talent that can design technological and social systems simultaneously.
In today’s world, however, it is becoming more and more unrealistic for a single person to undertake both social and technological design. A new generation of talent that is capable of bridging this gap must therefore be fostered so that social systems (including governance structures) are developed in concert with scientific research.
This is a major departure from traditional linear models of development in science and technology and moves towards a concurrent design of technosocial systems.
The traditional model for how society benefits from science is sequential. Specifically, there is basic science followed by applied science, which in turn leads to experimental development, commercial development and eventually to social deployment, often in the form of commercialisation.
Only then do researchers of the social sciences and humanities identify potential issues, but such delays in introducing a social perspective result in greater economic cost.
For example, if the graveness of internet privacy issues had been realised in the early 1980s, there might have been far cheaper ways of resolving the problem. Now that technologies with inherent weaknesses have been fully deployed, however, privacy protection is consuming vast economic resources with less than satisfactory results.
As the impact of these technologies becomes greater and the speed of penetration accelerates in the globalised world, the cost of repairing these technologies escalates.
And with the introduction of AI and all its possibilities, it is imperative that we learn from past experiences and implement measures to ensure proper deployment of the technology. If not, what was intended for the betterment of society could turn out to be catastrophic. There is little room for indifference, ignorance or error.
Distrust between different fields of research
The benefits of incorporating a social angle early in the development phase are becoming ever more apparent, at least in theory. The reality, however, remains gloomy, as another divide is at play.
Science, technology, engineering and mathematics (STEM) researchers are often irked by their social science and humanities (SSH) counterparts for coming up with fictitious threats that only delay technological developments. SSH researchers also tend to present the threats in ways that are inaccessible for STEM researchers. Once such distrust emerges between these fields, communication becomes very difficult.
The concurrent design of technosocial systems must straddle both research and education. Keio University, being a social science-heavy university with strong science and engineering departments as well as a renowned school of medicine, has been slowly making progress in bridging these two realms.
Most recently, it has developed a university-wide research institute, the Keio University Global Research Institute (KGRI), where researchers across academic fields, some from overseas, gather to jointly conduct issue-based research. There are also collaborations with industry and governmental agencies.
Working together, concepts and operational constructs that are relevant to both STEM and SSH are being explored. In terms of AI, great emphasis is being placed on cybersecurity and cyber-civilisation research, part of which has contributed to the APRU project.
One of the goals of this research is to bridge existing academic divisions through the development of methodologies that allow those specialising in SSH to understand the potential and implications of technologies early on in the developmental stage and meaningfully convey these to STEM professionals.
However, from the APRU project, it was evident that to effectively and significantly bridge divides, efforts must be carried out through global collaboration. Without a doubt, research universities play critical roles in overcoming differences and creating effective ways of developing technologies for the good of humanity.
Those who share beliefs in evidence- and logic-based reasoning must come together and actively work towards understanding and collaborating across disciplinary divides.
Professor Jiro Kokuryo is vice president of Keio University, Japan, and professor in the faculty of policy management. He is senior international leader of the Association of Pacific Rim Universities (APRU).