If artificial intelligence becomes the biggest technological development of the 2020s, then it seems logical that students studying technology will be the most in-demand graduates. That would merely continue the current trend. Indeed, the number of British university students studying mathematics has jumped by more than a quarter in the last decade, while the number studying engineering and technology has risen by a fifth.
Yet the 2020s may see philosophy and languages become the degrees of choice for those wanting to work at the cutting edge of AI. The supply of these graduates is already limited: the number studying philosophy and history has dropped by a tenth over the last decade, while the number of language students has fallen by a fifth.
Demand for graduates skilled in sophisticated decision making and in cultural and linguistic nuance will be driven by AI's biggest problem.
Even the most basic machine learning has so far stumbled over moral problems. Consider some of the simple AI programmes already trialled. An Amazon résumé-screening tool learnt gender bias. The US COMPAS system exhibited significant racial bias when deciding which inmates to recommend for parole. Meanwhile, Microsoft's chatbot Tay turned abusive within the 16 hours it was live.
So, if decision making for basic AI is this hard, the development of more sophisticated applications will require developers with serious moral and decision-making skills. Take, for example, the autonomous car forced to choose between killing a child on the road or the occupants of the vehicle. Should it change its decision-making process once it crosses international borders? After all, studies show that some cultures would prioritise the child on the road, while others would prioritise the occupants of the vehicle.
As AI grows, the trickiest development issue will be how it answers impossible moral questions while accounting for different moral and cultural norms.
For AI chief executives, the stakes are high. When something inevitably goes wrong, they will be the ones justifying their product to Congressional investigators.
Already, some AI developers have had trouble pitching their services to firms that operate in highly regulated environments, such as finance. The unpredictable nature of machine learning means it is not always easy for technology-trained developers to justify the conclusions their systems reach. Finance clients therefore fear being questioned by regulators about something they cannot explain.
So the in-demand graduates of the 2020s may be those with philosophy and language degrees that are currently unfashionable. Of course, sociologists and anthropologists will be involved. But they study the human condition as it is, rather than as it should be. Deciding how things should be is what AI programmes will have to do, which is why people with these skills will be well paid to answer the big questions about how AI should think, and to develop standards with regulators. Whether different countries can agree on those standards is another matter. But it is one that will only boost demand for graduates with philosophical and language skills.