Artificial intelligence (AI) and machine learning seem set to play a growing role in construction projects. At site level, it’s easy to imagine that logistics, site equipment and some parts of the assembly process will be handled by smart machines.
Brick-laying robots, smart diggers and autonomous trucks are already familiar in the construction media, if not the construction site.
Those machines are certainly impressive, but their contribution to the overall success of a project will probably be quite marginal in the near future. The real value of AI lies above that, at the project planning and management level, where its main function is to help humans do a better job of understanding and running their projects.
This is the argument of Karthik Venkatasubramanian, Oracle’s vice president for data science, analytics and strategy. He says AI should be used to do what machines do well and humans do less well – which boils down to analysing, comparing or otherwise processing datasets. This, he says, is where the greatest returns are to be found, along with significant cost savings on any given project.
“Fundamentally, what we’re trying to do is to calculate what the probability of delay is,” Venkatasubramanian says. “And if the probability of delay is x, we want to be able to say why – for example, because a subcontractor is going to be late or because the weather is going to be bad. And we want to say how the critical path will be affected, and what’s going to happen to the float, things like that. It’s bringing that together that I think is really exciting.”
Priorities
The biggest problem in AI is getting access to “training data” – for example, a large number of pictures of cracks in concrete, together with data that explains what kind of concrete the picture shows, and what kind of crack. The more pictures you have, the better the AI is at recognising what it’s looking at.
The problem is that all of this takes time and effort, and the construction firms that have access to cracked concrete don’t have any immediate return for supplying pictures of it.
The application of AI and machine learning to visual data is possible, then, but it will be a while before a machine can interpret things like cracks with the same skill as an experienced engineer.
Oracle is therefore concentrating on what computers already do better than humans: separating the signal from the noise in large amounts of non-visual data.
Venkatasubramanian says: “When you’ve got thousands of things happening all the time, how do you know which ones to focus on? Machines are really good at that; it’s a really easy problem for us to solve. We’d like to focus on the low-hanging fruit. They’re the size of watermelons. What everyone is worried about is cost blowouts and schedule blowouts, and we have all the leading indicators to predict that in our data. It’s a no-brainer.”
Rather than using AI to interpret visual images, which requires vast amounts of training data and replicates a fairly common human skill, Oracle is using it to interpret items in a database – one of the company’s core skills.
For example, when a contractor plans a project using Oracle’s Primavera application, the software will be able to evaluate that schedule. “It will tell you a delay is likely because you’re using a subcontractor that has been involved in three delays in the past, or you’ve allowed two weeks for an activity that always takes you three weeks,” he says.
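As a rough illustration of the kind of check being described (a hypothetical sketch, not Oracle’s implementation, with the activity, subcontractor and thresholds invented for the example), a schedule evaluator might compare each planned activity against historical records:

```python
# Hypothetical sketch of a rule-based schedule check; all names and figures are invented.
# It flags activities whose planned duration is shorter than the historical average,
# or whose subcontractor has a record of past delays.
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    subcontractor: str
    planned_days: int

# Illustrative history; in practice this would come from past project records.
AVG_DURATION_DAYS = {"first-fix electrical": 21}   # an activity that always takes ~3 weeks
PAST_DELAYS = {"Acme Electrical": 3}               # a subcontractor behind three past delays

def evaluate(activity: Activity) -> list[str]:
    warnings = []
    avg = AVG_DURATION_DAYS.get(activity.name)
    if avg and activity.planned_days < avg:
        warnings.append(f"'{activity.name}' is planned at {activity.planned_days} days "
                        f"but historically takes about {avg}.")
    if PAST_DELAYS.get(activity.subcontractor, 0) >= 3:
        warnings.append(f"Subcontractor '{activity.subcontractor}' has been involved in "
                        f"{PAST_DELAYS[activity.subcontractor]} past delays.")
    return warnings

print(evaluate(Activity("first-fix electrical", "Acme Electrical", planned_days=14)))
```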
This enforcement of common sense is certainly valuable, but it is hardly rocket science – or data science, for that matter. There is more to it than that, though, because the evaluation can also bring in financial data from Oracle’s Textura application, change orders and requests for information (RFIs) from Aconex, and other data streams, such as weather forecasts.
This allows a more sophisticated picture to be built up of a project-in-progress. The aim is to evaluate its evolving risk profile and to express it as a set of probabilities – an 80% risk of a five-day delay, a 70% risk of a 10-day delay, and so on.
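One minimal way to picture that kind of output (purely illustrative, with simulated numbers and no claim about how Oracle’s models actually work) is to turn a distribution of possible delays into exceedance probabilities:

```python
# Illustrative only: convert simulated delay outcomes (in days) into a risk profile
# of the form "x% risk of an n-day delay". The distribution here is invented; real
# inputs would come from schedule, cost, RFI and weather data feeding a trained model.
import random

random.seed(0)
simulated_delays = [random.gauss(mu=9, sigma=4) for _ in range(10_000)]

def risk_of_at_least(days: float) -> float:
    """Probability that the project slips by at least `days`."""
    return sum(d >= days for d in simulated_delays) / len(simulated_delays)

for threshold in (5, 10, 15):
    print(f"Risk of a {threshold}-day delay or worse: {risk_of_at_least(threshold):.0%}")
```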
Plans
In the future, Oracle wants to be able to offer applications that can dig deeper into the construction management process.
“At some stage,” Venkatasubramanian says, “we want to understand better the activity that we are talking about and figure out whether we can re-sequence it. If it’s going to be 45°C then workers can’t be on site, so we can suggest some work be done somewhere else – all that smart stuff involving complex use-cases. But I would argue that has less value at the moment because there’s so much low-hanging fruit.”
As general data about project outcomes and leading indicators such as RFIs accumulates, the AI’s algorithms become better trained at predicting delays in any individual project. Venkatasubramanian compares this to “bringing it all down to Lego blocks”, then combining those blocks to suit a particular use-case.
Oracle may also add AI functionality to its existing suite of construction software. As with the scheduling example above, the idea is to make sure that we humans don’t make the obvious mistakes that come so naturally to us. For example, it is common for different people to be working with different versions of the same drawing, and although the software tracks the history of which document was sent to which person, it doesn’t warn them that not everyone has the latest version.
“At the moment the system will tell you the transmittal history, but the version number is left to people’s intelligence,” Venkatasubramanian says. “So, we’re flipping it around to say, here are five documents you need to worry about, because the latest one in the register has got nothing to do with what is being used on site. Your electrical contractor has one document that is at least 15 versions out of date. How are you supposed not to have design issues? These guys are going to build something that you’re going to have to knock over later on.”
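A stripped-down illustration of that check (with an invented register and transmittal list, not the real Aconex data model) might simply compare the version each recipient holds against the latest version in the register:

```python
# Hypothetical sketch: flag recipients working from documents that are behind the latest
# version in the register. Document names, recipients and versions are invented.
register_latest = {"E-101 electrical layout": 27}     # latest version of each document
transmittals = [                                      # who was sent which version
    {"doc": "E-101 electrical layout", "to": "electrical contractor", "version": 12},
    {"doc": "E-101 electrical layout", "to": "site engineer", "version": 27},
]

for t in transmittals:
    latest = register_latest[t["doc"]]
    behind = latest - t["version"]
    if behind > 0:
        print(f"{t['to']} holds '{t['doc']}' v{t['version']}, "
              f"{behind} versions behind the latest (v{latest}).")
```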
People
This means that, as well as training AI to predict the likelihood of project failure, Oracle also has to consider the way humans interact with the software. As Venkatasubramanian puts it, training an algorithm to do something with data models is the easy bit; getting humans to take the right actions once the AI gives them information might be harder.
One issue is that humans may have access to information that the computer is missing.
Venkatasubramanian gives the example of a contractor in New Zealand who ignored information about a subcontractor’s poor performance history because the margins on the job were so low that it was the only subcontractor the contractor could afford.
Another is that, as AI models reach a certain level of sophistication, they become “black boxes”: they can make a prediction with a high degree of accuracy, but neither the user nor the vendor knows why, which can lead humans to distrust the prediction.
Then there may be a question of how to set up the data model in a way that is most relevant to a particular user. One basic task of the AI is to track the progress of a project, but this can be assessed by a number of criteria – the percentage of the budget spent, the amount of materials consumed, or physical progress on site as assessed by aerial drone. Each of these may give a different percentage, and it’s not uncommon to have a 25% disparity between them. This means that the AI must somehow learn to tailor its algorithms to make its conclusions fit a particular project and a particular user.
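To make that disparity concrete, here is a toy comparison with invented figures, not a description of how the product reconciles the measures:

```python
# Toy example: compare three progress measures and flag a large disparity between them.
progress = {
    "budget spent": 0.62,                         # 62% of budget consumed
    "materials consumed": 0.48,
    "drone-assessed physical progress": 0.37,
}

spread = max(progress.values()) - min(progress.values())
if spread >= 0.25:                                # flag when measures diverge by 25% or more
    print(f"Progress measures disagree by {spread:.0%}:")
    for name, value in progress.items():
        print(f"  {name}: {value:.0%}")
```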
The overall aim, Venkatasubramanian says, is to create an AI that is able to strike a balance between importance, relevance and call to action.
“It’s a bit of a Venn diagram,” he says, “and the gold is at the intersection of these three circles. You find a lot of vendors talking about their tech – and we do too, sometimes – but the value is not in the tech; the value is in the solutions and the problems they can solve.”