If 2018 brought artificial intelligence systems into our homes, 2019 will be the year we think about their place in our lives. Next year, AIs will take on an even greater role: predicting how our climate will change, advising us on our health and controlling our spending. Conversational assistants will do more than ever at our command, but only if businesses and nation states become more transparent about their use. Until now, AIs have remained black boxes. In 2019, they will start to open up.
The coming year is also going to change the way we talk about AI. By the end of next year, wild speculation about its future, whether wide-eyed techno-lust or trembling anticipation of Roko's basilisk, will give way to hard decisions about ethics and democracy; 2019 will be the year that AI grows up.
Bots, troll farms, and fake news
At least 18 countries have seen their election results affected by fake news. On Facebook alone, an estimated 150 million people were targeted with inflammatory political ads.
"There are biased and inaccurate news sources virtually anywhere there are people," says Preslav Nakov, research scientist at the Qatar Computing Research Institute, who has studied the impact of fake news on elections. "Studies have shown that 70 per cent of users cannot distinguish real news from fake news," he adds. His team found that fake news stories spread six times faster on social media than real ones.
Nakov and colleagues at MIT are developing a system that will learn whether a news source is peddling propaganda or not. "Fighting misinformation isn't easy; malicious actors constantly change their strategies. Yet, when they share news on social media, they typically post a link to a website. This is what we are exploiting: we try to characterise the outlet where the article is hosted."
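As a rough illustration of that source-level idea, not Nakov's actual system, here is a minimal sketch in Python: characterise each outlet with a handful of site-level signals and train a classifier on them. Every feature, data point and label below is invented for illustration.

```python
# Sketch of source-level credibility scoring: classify the *outlet*
# hosting an article rather than the article itself.
# Features and data are invented toys, not real measurements.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-outlet features: has_https, domain_age_years,
# has_wikipedia_page (0/1), log_traffic_rank, share_of_all_caps_headlines
outlets = [
    [1, 25.0, 1, 3.2, 0.01],   # long-established broadsheet
    [1, 18.0, 1, 4.0, 0.02],
    [0,  0.5, 0, 9.5, 0.40],   # young, obscure, shouty site
    [0,  1.0, 0, 8.7, 0.35],
]
labels = [1, 1, 0, 0]          # 1 = reliable, 0 = propaganda-prone

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(outlets, labels)

# Score a previously unseen outlet
new_outlet = [[1, 2.0, 0, 6.1, 0.20]]
print(clf.predict_proba(new_outlet))   # e.g. [[P(propaganda), P(reliable)]]
```

The point of profiling the outlet rather than the article is that a site's character changes far more slowly than the stories it publishes, so one judgement can cover thousands of links.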
But machine learning can't tackle the problem alone. "The most important element of the fight against disinformation is raising user awareness, as propaganda becomes less effective once we're aware of it. It would also help limit the spread of disinformation as users would be less likely to share it," says Nakov.
Artificial intelligence meets civil society
Far from being a niche subject, big data and deep learning are affecting us all. "I'm interested to see if the difficult year Facebook has had leads to any kind of cultural change or initiative; I think Facebook's problems could be the early signs of a gradual change in the way we think about technology, and I hope that continues," says Jamie Susskind, author of Future Politics and former fellow of Harvard University's Berkman Centre for Internet and Society.
"The digital is political," Susskind says. "We must be helped to understand how the technologies that govern our lives actually work, the values they encode, and what purpose they serve. More radically, it means that we should have a hand in shaping or customising them so we don't have to rely on the morality or wisdom of tech firms alone."
The rise of the digital ethicist
Initiatives like Oxford University's Future of Humanity Institute and DeepMind's Ethics and Society project are bringing together specialists in technology and the humanities to try to foresee, and mitigate, the social costs of AI, as well as to steer research and investment towards projects that benefit society.
In 2018, the Nuffield Foundation launched the Ada Lovelace Institute, a charitable trust with a mission to educate a new generation of digital ethicists, foster research and inform debate. In the year ahead and beyond, expect more and more AI companies to hire professional ethicists into senior roles.
"By this time next year, we want Ada to be recognised as offering a trusted and informed contribution to complex questions," says Tim Gardam, chief executive of the Nuffield Foundation. The aim, he continues, is to identify issues that need addressing collectively to ensure a data-driven society is socially inclusive.
The real laws of robotics
Suffice to say, 2018 hasn't been a great year for Facebook. From boycotts to governmental grillings, there's been a lot of theatre but not much action. Slowly, that will change. "2019 is a bit soon for many hard laws to be enacted, but lots of non-binding standards and guidelines are already being released," says Jacob Turner, international lawyer and author of Robot Rules: Regulating Artificial Intelligence. "It is often the case in regulation that soft laws of this kind are a precursor to binding ones."
In the coming year, legislators should focus on getting clearer laws on the statute books. "It would be much better to create formal laws for AI rather than leaving things to judges," says Turner. "Judges don't have an opportunity to consult the wider public, or to do long-term studies into the impact of their decisions."
So far, discussion on the future of technology is rife with nice-sounding but meaningless sine dicendos. According to Turner, this level of rhetoric is getting us nowhere. In 2019, "governments as well as private industry should stop trying to come up with short statements of vague, high-level principles like 'AI should be used for good' and make a start on the more difficult task of working out more detailed rules and regulations," he says.
Conversational customer service takes off
Last year we started to talk to our machines. Now it's finally time for them to talk back, and to do more than spout gibberish or basic weather updates.
Google recently launched Duplex, a service that can, if flashy technical demos are to be believed, call restaurants and make reservations on your behalf. This technology will soon come to your banking app, your calendar, and your email, with smarter natural language generation. According to Robin Kearon, senior vice president at Kore.ai, systems that initiate conversations with you are about to become the "new normal".
We've been promised this before, but now the technology is almost ready. While there may be some usability (and regulatory) wrinkles to sort out, the technologies of natural language understanding and generation are, finally, good to go.
The biggest challenge that remains to be solved is that of making machines more socially capable. "As deeply social animals, humans tend to treat AI and robots as social as well," says professor Bertram Malle of Brown University. And that can lead to disappointments. "Real human conversation is too complex for current systems, and there is no social intelligence that would know when it might be appropriate [to say something]."
Think of the difference in how you type a query into Google and how you might ask that same question to a human. You might type "bars near me" into Google. Say this to a human and you'll look like an idiot. In moving our communications with machines from the typed word to the spoken word, one of the biggest remaining challenges is learning a new way of speaking.
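A toy sketch of that translation step, with invented patterns and an invented intent schema, shows both phrasings resolving to the same structured request:

```python
import re

# Both the terse typed query and the conversational spoken request
# below should resolve to the same structured intent.
# The patterns and intent schema are made up for this sketch.
PATTERNS = [
    (re.compile(r"\bbars? near me\b", re.I),
     {"intent": "find_place", "type": "bar"}),
    (re.compile(r"\b(grab|get) a drink\b.*\b(nearby|around here)\b", re.I),
     {"intent": "find_place", "type": "bar"}),
]

def parse(utterance: str) -> dict:
    """Return the first matching intent, or an 'unknown' fallback."""
    for pattern, intent in PATTERNS:
        if pattern.search(utterance):
            return intent
    return {"intent": "unknown"}

print(parse("bars near me"))                              # typed style
print(parse("where could we grab a drink around here?"))  # spoken style
```

Real assistants replace the hand-written patterns with learned language models, but the job is the same: collapse many ways of asking into one machine-readable request.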
Silicon Valley cops
Smart detection of fraud and money laundering isn't new, but the trend is increasingly away from automation and towards augmented intelligence.
Take transaction fraud. The technology is great at running thousands of concurrent experiments to predict the likelihood of any specific order being fraudulent, but the real results come when experienced (human) analysts collaborate with machines.
Spotting suspicious behaviour also requires that fraud detection systems and analysts look in new places. "Financial criminals rarely operate as singletons," says David Nicholson of BAE Systems Applied Intelligence. "Their signal is an abnormal network of connections between individuals, accounts, email addresses, residences, and so on." In 2019, crime-fighting AI will shift to spotting criminals based on human networks evolving over time, rather than trying to spot one-off crimes.
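A minimal sketch of that network view, with invented accounts, attributes and an invented threshold: link records through shared details, then flag unusually dense clusters.

```python
import networkx as nx

# Link accounts that share attributes (emails, addresses) and flag
# clusters where many accounts hang off few shared details.
# All accounts and attributes here are fabricated examples.
edges = [
    ("acct_1", "email:a@x.com"), ("acct_2", "email:a@x.com"),
    ("acct_2", "addr:1 High St"), ("acct_3", "addr:1 High St"),
    ("acct_3", "email:a@x.com"),   # three accounts, heavily interlinked
    ("acct_9", "email:z@y.com"),   # an ordinary, isolated account
]
G = nx.Graph(edges)

for component in nx.connected_components(G):
    accounts = {n for n in component if n.startswith("acct_")}
    attributes = component - accounts
    # Crude anomaly signal: several accounts sharing very few attributes
    if len(accounts) >= 3 and len(attributes) <= len(accounts):
        print("suspicious cluster:", sorted(accounts))
```

Production systems add the time dimension Nicholson mentions, watching how these graphs grow and mutate, but the core move is the same: score the network, not the transaction.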
The dawn of machine explanations
Systems that make predictions but can't explain them are risky in several ways: a decision based on some discriminatory feature like race or gender can be damaging for society, while a decision influenced by an easily faked bit of data is extremely brittle. Marco Tulio Ribeiro of Microsoft Research recently developed Lime (local interpretable model-agnostic explanations), or, in simple terms, a software system that helps make sense of decisions made by algorithms. "Explanations help developers and users assess whether or not they should trust the model before deploying it, and also help point out areas for improvement," he says.
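Lime is available as an open-source Python package, and its tabular workflow runs in a few lines. The sketch below follows standard usage of the published package, with a scikit-learn forest standing in for the opaque model and a public dataset in place of real decisions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train an opaque model, then ask Lime why it made one prediction.
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
)
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
# Each item pairs a feature condition with a signed weight:
# the local evidence for or against the predicted class.
print(explanation.as_list())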
"Explanations provide insights into a model's vulnerability," adds professor Leman Akoglu, who researches explainable AI at Carnegie Mellon University. "If you're trying to identify terrorist suspects, for example, and an explanation shows that the underlying model is relying on an individual's age, then the model might be vulnerable, since it is easy to lie about one's age."
Akoglu and Ribeiro see explainable AI as a new tool in its own right, helping people and machines work together. "I always hesitate to make predictions about the future, but I am very interested to see how the area of human-AI collaboration develops," says Ribeiro. "There are many areas where the human-AI team is potentially more effective than either one taken separately."
Climate change
"Physics-driven climate simulation models have generated more [data] than all satellite measurements of Earth's weather," says professor Claire Monteleoni from the department of computer science at the University of Colorado Boulder, who is using smart simulations to help predict and mitigate extreme climate events. "These data-driven technologies are actually the most cost-effective way to unlock insights from the massive amounts of both simulated and observed data that have already been collected."
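One standard climate-informatics recipe for combining simulated and observed data, offered here as an illustrative sketch rather than Monteleoni's own method, is quantile mapping: adjusting model output so its distribution matches what has actually been measured. The numbers below are synthetic stand-ins.

```python
import numpy as np

# Quantile mapping: bias-correct simulated values by mapping them
# through the simulated CDF onto the observed CDF.
# All data here are synthetic stand-ins for real records.
rng = np.random.default_rng(0)
observed = rng.gamma(shape=2.0, scale=3.0, size=5000)    # e.g. daily rainfall, mm
simulated = rng.gamma(shape=2.0, scale=4.0, size=5000)   # model runs too wet

def quantile_map(values, simulated, observed):
    """Map each value through the simulated CDF onto the observed CDF."""
    quantiles = np.searchsorted(np.sort(simulated), values) / len(simulated)
    quantiles = np.clip(quantiles, 0.0, 1.0)
    return np.quantile(observed, quantiles)

future_run = rng.gamma(shape=2.0, scale=4.0, size=10)
print(quantile_map(future_run, simulated, observed))  # bias-corrected values
```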
Monteleoni runs hackathons to encourage young people to enter the field of climate data science. "I'm thrilled to see a generation of data scientists and AI researchers, especially at the student and early-career levels, take interest in climate informatics. [In 2018] we overflowed the meeting room, and unfortunately were unable to admit everyone on the waiting list," she says.
And as for her resolutions for 2019? "As researchers and educators in the fields of AI and machine learning, we should strive to expose students to diverse application areas that address major challenges, not only in the field of climate, but also in other areas of societal benefit, such as sustainability, agriculture, health, education, fairness, diversity, and inclusion."
Learning the lay of the land
As populations urbanise, and food and water security continue to be major concerns, machine learning will help us make the most of the land we have. Take farming.
"The Farm Census in England is now only conducted every ten years, providing data which is then clumped together into 2 km squares," says professor Ian Bateman, director of the Land, Environment, Economics and Policy Institute, which is using AI to enhance land surveys. "Satellites provide a complete map of the country in less than a week, every week, and in tremendous detail. Machine learning techniques can turn that tsunami of data into clearly interpretable information, discovering messages that would take researchers an unfeasibly long time to discover," he adds.
"Our hope is that by the end of 2019 we will have used machine learning techniques to bring earth observation data into our understanding of how land use can be changed in ways which allow policymakers to make decisions which are good for farmers, good for society and good for the environment."
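As a toy version of that pipeline (the spectral bands, cluster centres and labels are all synthetic assumptions, not real survey data), a classifier can turn per-pixel satellite measurements into land-cover classes:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Predict land cover from per-pixel spectral bands.
# Band values and class labels are synthetic inventions.
rng = np.random.default_rng(1)

def make_pixels(n, red, nir):
    """Synthetic pixels around typical (red, near-infrared) reflectances."""
    return rng.normal(loc=[red, nir], scale=0.03, size=(n, 2))

cropland = make_pixels(200, red=0.10, nir=0.45)  # vegetation: low red, high NIR
urban = make_pixels(200, red=0.30, nir=0.25)
water = make_pixels(200, red=0.05, nir=0.03)

X = np.vstack([cropland, urban, water])
y = ["crop"] * 200 + ["urban"] * 200 + ["water"] * 200

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[0.11, 0.48], [0.29, 0.22]]))  # -> ['crop' 'urban']
```

Scaled up to billions of pixels a week, classifications like this are what turn raw imagery into the field-by-field maps that a once-a-decade census cannot provide.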
A healthier AI
The application of machine learning to medicine is helping to diagnose illnesses earlier, unlocking promising new avenues for treatment and helping to ensure that patients take the medication they've been prescribed.
Since 2013, we've known that medical professionals risk burnout when faced with the huge data wrangling tasks that modern medicine demands of them. In a study of nearly 500 hours of clinical time in a busy emergency department, 43 per cent of time was spent on data entry, compared to only 28 per cent with patients. In a single ten-hour shift, a doctor could expect to make 4,000 mouse clicks.
The problem is so acute that one Stanford academic published a paper this year calling for a change. In 2019 and beyond, doctors will no longer be expected to feed machines data in onerous, specific ways. Instead, the grunt work will be shifted to the machines, which will have the intelligence to interpret looser, less formatted data. Rather than form-filling and box-ticking, conversation-driven data input will become a reality, allowing virtual assistants to automatically extract important information from conversations between medics and patients.
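A toy sketch of what conversation-driven data entry might look like, with an invented transcript and an invented extraction rule standing in for the language models a real assistant would use:

```python
import re

# Pull a structured medication record out of a transcribed consultation
# instead of asking the doctor to fill in a form.
# The transcript, schema and pattern are all fabricated for this sketch.
transcript = (
    "Patient reports a headache for three days. "
    "I'm prescribing ibuprofen 400 mg, twice daily, for five days."
)

MEDICATION = re.compile(
    r"prescribing (?P<drug>\w+) (?P<dose>\d+ ?mg), (?P<freq>[\w ]+ daily)",
    re.I,
)

match = MEDICATION.search(transcript)
if match:
    record = match.groupdict()
    print(record)  # {'drug': 'ibuprofen', 'dose': '400 mg', 'freq': 'twice daily'}
```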
Updated 03/01/2019: Tim Gardam is the chief executive of the Nuffield Foundation
This article was originally published by WIRED UK
