Artificial intelligence (AI) is a captivating frontier that holds immense excitement and promise. It is an area where machines learn, adapt and innovate, reshaping industries and improving our daily lives. From autonomous vehicles to advanced medical diagnostics and natural language processing, AI opens new horizons of possibilities, tackles complex problems, and drives innovation at an unprecedented pace. As AI technology evolves, it has unlimited potential to revolutionize the way we work, communicate and interact with each other and the world around us.
But as we seek to unlock the potential of AI and other transformative technologies, we must ensure that we design and implement them in purposeful, inclusive, sustainable and responsible ways.
Making this vision a reality is not only a regulatory challenge, but also a human and cultural one. This means we will need new social infrastructure to help us manage the inherent tensions created by these powerful and already ubiquitous technologies.
Learning from the past
When we have faced social transformations of similar scale and complexity, history has shown that it is essential to first understand the nature of the challenge before us.
Take the environmental movement, for example. Our collective thinking about the true nature of our relationship with nature was heavily influenced by Rachel Carson, whose seminal 1962 book Silent Spring documented the complex impacts of humans on the environment. Before her book, we were arrogant about our ability to bend nature to our needs. Attempts to address environmental issues were generally fragmented and narrowly focused on local impacts.
Carson helped the world understand the important challenge of reshaping and reconnecting our intricate relationship with nature. She warned us about the consequences of inaction for our planet. She made it clear that all institutions, sectors and individuals had the opportunity and responsibility to reframe our connections to the environment in terms that promoted its protection and health.
As we seek to shape our relationship with powerful new AI technologies, we need a similar awakening to the true nature of the challenge we face. We have already glimpsed some of the risks in how AI may evolve over the coming generations. While thoughtful regulation will be critical to establishing guardrails for AI, it is far from sufficient given the pace and pervasiveness of the technology. To truly improve our understanding of, and relationships with, AI and other digital technologies at scale, and to more proactively shape them as they shape us, we must take a new approach.
One such approach, which we have been developing for almost a decade, is a mindset and practice we call technology management. Like the environmental movement sparked by Carson's book, technology management involves understanding the true nature of technology and our relationship to it. The environmental movement has also taught us that understanding alone is not enough to confront the immense challenges before us. We must also get better at navigating underlying value tensions, individually and collectively, while empowering people to take action to help bend the arc of technology toward good.
A new path forward
The essence of technology management lies in shaping research and dialogue that forges new connections between a wide range of people. It’s about actively seeking to understand different values and perspectives and, rather than falling into divisive debates, striving to identify mutually beneficial opportunities. Such dialogue is critical to shaping the kind of technological future that aligns with our collective aspirations.
One of the broad tensions common in technology is between people who are naturally cautious and those who are naturally hopeful. Both views have merit, but unless we can surface the layers beneath these feelings and openly deliberate about the values behind them, it is difficult to move past the surface tension. Technology management aims to do exactly that.
These tensions manifest at every stage and level of the product development process, whether we seek to create safe spaces for internal team dialogue about potential unintended consequences or to engage seriously with our harshest external critics. Without the right tools and the right permissions, we know what happens: nothing.
The barriers to this kind of research feel too high and too risky, so we sadly sigh and move on, missing the opportunity for dialogue and progress. The technology we hope to build needs that research and that dialogue happening everywhere, not only because it is the right thing to do, but because it also represents the next frontier for innovation. The challenge of better balancing our social, environmental and economic ambitions is the creative constraint we need to build everything better. Similarly, technology management can help address gaps in early STEM education, where this kind of inquiry is too often absent.
For example, imagine a young computer scientist working at a startup that leverages generative AI and social media data to help health authorities predict and pinpoint emerging mental health issues. Instead of simply moving forward with a mindset driven by techno-optimism, they approach development as stewards of the technology. They first engage with a broad set of stakeholders, including health and social sector providers who could be affected by their model's predictions. The community they build and the feedback they gather help them better navigate the ethical tensions inherent in the technology and shape the product in ways they hadn't originally imagined. Their challenges are not fully resolved, but they have developed the capacity they need to find their way to something better.
We continue to educate and credential new technical professionals who have the ability to create powerful new technologies but lack the tools to critique their own role or to engage in productive dialogue about the inherent tensions between business and social outcomes. Technology management can support the development of professionals who not only have the skills to innovate but can also shoulder the responsibility that innovation entails.
Government officials have a unique opportunity to lead by example by adopting technology management practices in developing regulations for AI and other technologies. And they can design policies that invest in strengthening society's capacity for the dialogue that technology management represents.
Recent developments, such as U.S. President Joe Biden's executive order on AI, which aims to establish new safety and security standards, and the Canadian government's introduction of a voluntary code of conduct for the responsible development of AI, highlight governments' growing involvement in addressing AI challenges. At the same time, dissenting views, such as those of Shopify founder Tobi Lütke, who opposes AI regulation, underscore the industry's ongoing debate about the need for clear guidelines. These developments are encouraging, and they illustrate the ongoing tug-of-war that technology management seeks to transcend.
While the current wave of innovation in AI is exciting and efforts to create regulatory guardrails are important, we are at a crucial moment that demands more than just a reaction; it requires a proactive stance toward the powerful technologies we are rapidly creating at scale.
In some ways, this may seem like radical thinking, but it is no less radical than the perspective Carson shared when she helped enlighten us about the true nature and scope of the environmental challenge.
Our ultimate success will be measured not only by how we manage the challenges and opportunities within specific technologies and applications, but also by the capacity we develop to meet the many more opportunities and challenges that lie ahead. At MaRS Discovery District, we are fostering a community of innovators committed to technology management and helping diverse organizations with similar aspirations.
Now is the time to act wisely. We need to come together, as individuals, organizations and society as a whole, not only to drive innovation but also to actively steer it toward the benefit of all.
Mark Abbott is the director of technology management at MaRS Discovery District.
Martin Ryan leads risk strategy at ServiceNow.