Could your next boss be an AI robot?
Automation of administrative work is increasing, improving the quality, consistency and efficiency of monotonous, time-consuming activities. For example, reviewing 20,000 files can now be done in hours or days rather than weeks or months.
Many professionals welcome the ability to use AI to automatically assign work or highlight specific files and tasks that need to be prioritised.
They’re aware that automation and AI must be fair and unbiased, but also that humans must remain conscious of AI’s limitations. There is evidence that, at present, we are not.
Minding the limits of AI
In his recent book, “Artifictional Intelligence,” Harry Collins argues that AI will never understand human language well enough to become truly intelligent.
The danger is that we rely on and trust artificial intelligence so much that we stop thinking for ourselves. We cease to apply critical thought and challenge the algorithm, because we have fooled ourselves into believing AI is sufficiently intelligent to be infallible.
This type of surrender is exemplified by stories of people who have nearly driven off a cliff or into a lake because they disregarded their own logic, switched off their brains, blindly followed instructions, and trusted that a machine knows more about the physical world than they do.
Automated processes still require people
The challenge, when automating a process, is to make sure the automated tasks still give human beings the opportunity to use their judgement. Human decision-making must consider the limitations of the algorithm making the decision, the likely range of quality in the data inputs, and the potential consequences of an error.
Escalation to a human for review or approval must be an integral part of any AI at present, and that human being must be trained and required to apply proper critique to the task before them. Humans are able to take into account information that even the smartest, most data-intensive AI running on the latest generation of supercomputers cannot—context, tacit knowledge, neologisms, sarcasm, motivations and common sense.
The balance between AI efficiency and human review
The difficulty is knowing when people should intervene without diminishing the efficiency benefits of automation. If every decision is reviewed manually, is AI really that efficient? There is also danger in believing in a foolproof algorithm: it will end up making all of us look silly at some point or another!
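One common way to strike this balance, sketched here as an illustration rather than a prescription, is to auto-approve only the decisions the system is confident about and escalate the rest to a human reviewer. The `Decision` structure, the `triage` function and the 0.95 threshold below are all hypothetical choices, not something from a particular product:

```python
# Illustrative sketch: route only low-confidence AI decisions to a human.
# The Decision structure and the 0.95 threshold are hypothetical choices.
from dataclasses import dataclass


@dataclass
class Decision:
    task_id: str
    action: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0


def triage(decisions, threshold=0.95):
    """Split decisions into auto-approved and human-review queues."""
    auto, review = [], []
    for d in decisions:
        (auto if d.confidence >= threshold else review).append(d)
    return auto, review


decisions = [
    Decision("file-001", "archive", 0.99),
    Decision("file-002", "flag-for-audit", 0.62),
    Decision("file-003", "archive", 0.97),
]
auto, review = triage(decisions)
print(f"{len(auto)} auto-approved, {len(review)} escalated to a human")
# prints "2 auto-approved, 1 escalated to a human"
```

Under this pattern, only the low-confidence minority reaches a reviewer, so most of the speed benefit of automation is preserved while a trained human still sees the borderline cases.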
The truth is that our next boss, the entity that allocates us tasks, reviews our output, and shapes our daily working life, could well be an algorithm in the near future.
As we move toward wider adoption of AI, we must take steps to make sure that we do not over-automate and remove the human entirely from the process. Otherwise, we will have surrendered our critical thought to the robots—and that’s when the trouble really could begin!