The SNS Tylösand Summit 2019 addressed the vast opportunities AI presents for public policy and business, as well as questions of regulation and the ethical use of AI. AI is already an integral part of our everyday lives. Our social media feeds are optimized using AI, and media streaming sites use AI to suggest shows based on our viewing preferences. AI can also be used to measure poverty reduction and help allocate government resources. Even though these are current applications, the integration of AI throughout all functions of society is expected to surge in the coming years.
Susan Athey is the Economics of Technology Professor at Stanford Graduate School of Business. She was the first woman to receive the John Bates Clark Medal, one of the most prestigious awards in economics, and she has worked as a consultant for Microsoft for over 10 years. In her talk, Athey first focused on her work on human-centered Artificial Intelligence (AI). In her words, the technology should be inspired by human intelligence, and its development must be guided by how it will impact humans. AI and its applications should enhance humans, not replace them.
Athey elaborated on how AI can optimize government resource allocation. With the intention to demystify the buzz surrounding AI, she gave examples of how AI has been applied in different contexts. As a concrete example she mentioned Firecast, an AI technology that collects data on risk factors for buildings and predicts which buildings are most likely to catch fire. Based on these predictions, high-risk buildings can be prioritized for inspection – an instructive example of AI aiding public policy decisions.
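The prioritization step Athey describes can be sketched in a few lines. This is a toy illustration, not Firecast's actual method: the building records, risk factors, and hand-picked weights are all hypothetical, whereas a real system would learn its scoring function from historical fire data.

```python
# Hypothetical building records with illustrative risk factors.
buildings = [
    {"id": "A", "violations": 3, "age": 80, "vacant": 1},
    {"id": "B", "violations": 0, "age": 15, "vacant": 0},
    {"id": "C", "violations": 5, "age": 60, "vacant": 1},
]

def risk_score(b):
    # Toy linear score with made-up weights; a real system would
    # fit these weights to historical fire outcomes.
    return 2.0 * b["violations"] + 0.05 * b["age"] + 3.0 * b["vacant"]

# Inspect the highest-risk buildings first.
priority = sorted(buildings, key=risk_score, reverse=True)
print([b["id"] for b in priority])  # → ['C', 'A', 'B']
```

The point of the sketch is only the ranking: once a model emits risk scores, turning them into an inspection schedule is a simple sort.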
However, Athey also highlighted the manipulability of the technology. This was illustrated with another example: the measurement of poverty reduction using satellite imagery. Improvements in image recognition have made it possible for AI algorithms to interpret satellite images and identify wealth in different neighborhoods. In developing countries, AI can aid in measuring poverty reduction; one indication of rising wealth is a new roof made of metal instead of tarpaulin or branches. If government resources are allocated based on these observations, areas that have built metal roofs may see a drop in the resources they receive. If the allocation strategy is public knowledge, there may be incentives to cover up a newly built metal roof – in other words, the technology can be manipulated. Athey therefore highlighted the importance of considering which incentives change when AI technology is implemented.
Athey then turned to the importance of distinguishing between issues that are suitable prediction problems and issues that are so-called “what if” problems. Answering a “what if” problem requires reasoning about causality, as in “If I implement this policy, how will my customers react?” She gave the example of companies trying to predict churn. Machine learning is great at predicting which customers are at risk of cancelling a company’s service. The problem is that the highest-risk customers are not the ones who would benefit most from a sales call intervention – they are likely to cancel the service anyway. Prediction alone does not provide the remedy.
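Athey's churn point can be made concrete with a toy calculation (all customer names and probabilities below are hypothetical). What matters for targeting a sales call is not the predicted churn probability itself, but the *change* in that probability the call would cause – the treatment effect:

```python
# Hypothetical customers: predicted churn probability without vs. with a sales call.
customers = {
    "high_risk":   {"p_churn_no_call": 0.90, "p_churn_call": 0.88},  # will likely leave anyway
    "persuadable": {"p_churn_no_call": 0.50, "p_churn_call": 0.20},  # the call changes the outcome
    "loyal":       {"p_churn_no_call": 0.05, "p_churn_call": 0.05},  # no call needed
}

def uplift(c):
    # Treatment effect of the call: reduction in churn probability.
    return c["p_churn_no_call"] - c["p_churn_call"]

best = max(customers, key=lambda name: uplift(customers[name]))
print(best)  # → 'persuadable', not 'high_risk'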
In order to address “what if” questions Athey proposed that we need domain experts and creative data scientists in combination with prediction algorithms. There is a need for continuous experiments – not just prediction – to evaluate interventions and policy. She mentioned that such processes are already in place at large tech firms; Google would never make changes in an algorithm before it has been tested with randomized control trials. Athey’s talk focused on the opportunities of AI, but her speech ended with a cautionary note: all new possibilities with AI raise ethical issues that need to be addressed.
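The experimentation loop Athey points to – randomize, intervene, compare – can be sketched with simulated data. The conversion rates below are invented for illustration; the shape of the analysis (random assignment, then a difference in group means) is the point:

```python
import random

random.seed(0)

def simulate_user(treated):
    # True conversion rates (unknown to the analyst in practice):
    # 10% baseline, 13% under the hypothetical intervention.
    p = 0.13 if treated else 0.10
    return 1 if random.random() < p else 0

# Randomized assignment: each group gets 10,000 users.
control = [simulate_user(False) for _ in range(10_000)]
treatment = [simulate_user(True) for _ in range(10_000)]

# Estimated effect of the intervention: difference in conversion rates.
lift = sum(treatment) / len(treatment) - sum(control) / len(control)
print(f"estimated lift: {lift:.3f}")  # should land close to the true 0.03
```

Because assignment is random, the difference in means is an unbiased estimate of the causal effect – which is exactly what a pure prediction model cannot deliver.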
Avi Goldfarb is the Rotman Chair in Artificial Intelligence and Healthcare and Professor of Marketing at Rotman School of Management in Toronto. He is also Chief Data Scientist at the Creative Destruction Lab and co-author of the best-selling book Prediction Machines: The Simple Economics of Artificial Intelligence. In his speech Goldfarb introduced the “simple” economics of Artificial Intelligence (AI), where a key argument is that AI is lowering the cost of prediction. As Econ101 – the introductory course in economics – teaches, when something becomes cheap, we simply use more of it. Goldfarb drew parallels to previous tech innovations such as the introduction of computers. He explained that what the computer did was to make arithmetic cheap, and therefore we started using more of it. At first, the technology was applied to clear arithmetic problems, such as accounting. But as the technology became even cheaper it transformed what was viewed as an arithmetic problem. For example, photography, once a chemical process, transformed into a digital (number-based) process.
Goldfarb drew the parallel to AI and suggested that at first AI will be used for what we typically view as traditional prediction problems. For banks, approving or denying a loan is an issue of predicting risk, and prediction is what AI does better, faster and cheaper than humans. However, as AI decreases the cost of prediction even further, we will observe less traditional applications of prediction technology. Self-driving vehicles were mentioned as an early example of a less obvious prediction problem. In addition to figuring out new applications for AI and identifying what is an appropriate prediction problem, the challenge for businesses will be to figure out what the complements to prediction will be. As Goldfarb put it: “what will be the cream and sugar of prediction?” The value of the complements to prediction will increase.
Goldfarb turned to the question of why prediction is so valuable. His answer: because it is part of the decision-making process; it has little or no value on its own. It needs to be accompanied by judgment and, not least, action. The figure below depicts the roles of prediction and judgment in the process of carrying out a task.
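Goldfarb's decomposition can be sketched as a tiny decision rule: the machine supplies a prediction, humans supply judgment in the form of payoffs, and the decision combines the two. Using the loan example from his talk – with entirely hypothetical numbers – it might look like this:

```python
# Machine prediction: probability the applicant defaults (hypothetical).
p_default = 0.08

# Human judgment: the payoff of each action under each outcome (hypothetical).
payoffs = {
    "approve": {"repays": 100, "defaults": -900},
    "deny":    {"repays": 0,   "defaults": 0},
}

def expected_payoff(action):
    # Combine the prediction with the judged payoffs.
    return ((1 - p_default) * payoffs[action]["repays"]
            + p_default * payoffs[action]["defaults"])

decision = max(payoffs, key=expected_payoff)
print(decision)  # → 'approve' (expected payoff 20 vs. 0)
```

The prediction (`p_default`) is worthless until someone judges what a default costs and what a repaid loan earns; change the payoffs and the same prediction yields a different decision.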
Goldfarb ended by proposing a thought experiment. Imagine a volume dial, but instead of turning up the volume you turn up the prediction accuracy of AI. He then asked how your particular industry would change as AI gradually improves prediction quality. As an example, he mentioned Amazon. Like Sears or IKEA, he pointed out, they are all essentially catalogue businesses where customers browse for products they want and then put in an order. With just a slight turn of the dial, Amazon has implemented AI to offer recommendations to shopping customers, and about 5 percent of the time a customer buys one of the suggested products. Goldfarb pointed out that this is an amazing number considering the vastness of the Amazon catalogue. But as prediction accuracy improves even more, Amazon might even change its business strategy, from the standard “shopping then shipping” strategy to a “shipping then shopping” strategy. It is only a matter of making the shopping prediction accurate enough. Goldfarb noted that Amazon applied for a patent for anticipatory shipping already in 2013.
Virginia Dignum is Professor of Social and Ethical Artificial Intelligence at Umeå University. She was one of the first professors to be recruited to the Wallenberg AI, Autonomous Systems and Software Program (WASP). She has also been appointed by the EU Commission as one of 52 experts in the High-Level Expert Group on Artificial Intelligence. In her talk, Dignum focused on ethical considerations of AI, posing several tough questions. Since AI can potentially do many different things, we need to ask whether it should do them. An array of questions naturally follows from such considerations: “Who should decide? Which values should be considered? Whose values?” Dignum stressed that these important questions need to accompany the implementation of AI.
According to Dignum, responsibility must be considered in design, by design and for design(ers). She introduced the concept of ART – Accountability, Responsibility and Transparency – for the in-design processes. She also raised the question of how to teach ethical values and behaviors to artificial autonomous systems.
Dignum emphasized that because AI is a tool, an artefact created by humans, the technology itself can never be responsible. We are responsible. Therefore, it is essential to think about codes of conduct, auditing and tools to guarantee compliance with guidelines. The regulatory frameworks should be a stepping stone, not a block to innovation.
In a session focusing on the Swedish response to AI, Marcus Wallenberg, Chair of SEB, emphasized that in order to stay competitive Sweden needs to be at the forefront of AI. He mentioned that so far, he had not seen investments of the magnitude needed for Sweden to compete globally on these issues, and he expressed worry that Sweden was about to fall behind with this new technology. To remedy this, it is essential that Sweden invests in specialist knowledge, focuses on skill development of the existing workforce, joins forces with other EU countries, links academia, business and government even more closely, and shows the courage to set ambitious long-term goals for AI.
The introductory talk by Marcus Wallenberg was followed by a panel where Ann Linde, Minister of Foreign Trade, Darja Isaksson, Director General at Sweden’s Innovation Agency Vinnova, Eva Nordmark, Chair at TCO, and Oskar Nordström Skans, Professor of Economics and Director of Uppsala Center for Labor Studies (UCLS), Uppsala University, offered their views on the topic of AI in the Swedish context.
Ann Linde emphasized that Sweden, as a small open economy, is used to being exposed to new challenges. But just as with previous technologies, not everyone will be a winner when AI is introduced more broadly. This will present new challenges for the labor market. It is therefore of utmost importance that Sweden invests further in education and skills development of the existing labor force. This is something Sweden is very good at from an international perspective, and it will keep Sweden competitive in the future.
Darja Isaksson highlighted that it is necessary for Sweden to take the lead when it comes to developing and implementing AI. An important first step is that organizations must dare to make data available. Sweden has unique high-quality data that entails a great competitive advantage when it comes to AI, but if this data is not used, Sweden risks falling behind. She stressed the importance of designing regulation in a way that prevents fragmentation and allows for new innovative approaches. She called for sharpened collaboration, which she sees as essential for overcoming fragmented regulations that inhibit the use of big data, and encouraged actors to engage in the national initiative “AI Innovation of Sweden”.
Eva Nordmark also highlighted the future labor market adjustments. She emphasized that even though jobs persist, they are constantly changing. Structural transformation will therefore be central, and lifelong learning will become even more crucial for Sweden’s competitiveness.
Oskar Nordström Skans pointed out that AI is based on historical data. Therefore, AI is not suitable for predicting effects of paradigm shifts, such as the effects of the introduction of AI. He added that historically the labor market has been surprisingly stable despite large technological advances. What may be different about AI is that much of the automation takes place in the private sector, which has historically financed the public sector through taxes. A future challenge may therefore be how to design appropriate tax bases.