Andrew Hanna | Frantz Ward
Artificial intelligence is no longer theoretical in construction. It is actively shaping how projects are designed, scheduled, and executed. Generative design tools, automated quantity takeoffs, and AI-driven safety monitoring are becoming part of everyday workflows. While the technology has advanced quickly, the contracts governing construction projects have not kept pace. For owners, contractors, and design professionals, this gap creates real and often overlooked risk.
Standard AIA contracts were not written with AI in mind. They are built on the assumption that licensed professionals are making decisions, exercising judgment, and producing the final work product. When AI tools are involved, that assumption becomes less clear, and so does the allocation of risk. The party relying on AI to perform any of its obligations bears the risk if the AI produces inaccurate or incomplete information.
Where the Gaps Exist
1. Standard of Care
AIA agreements require design professionals to meet a professional standard of care. When AI-generated content is used, it is unclear whether reliance on that content satisfies those obligations or creates additional exposure. The contracts do not define the level of human review required, but any professional standard of care obligates the professional to exercise independent judgment, regardless of the AI or other software tools used.
2. Responsibility for Design Documents
AIA contracts place responsibility for drawings and specifications on the design professional. If errors originate from AI-generated content, liability still flows back to the human party. There is no distinction between human error and technology-driven issues.
3. Intellectual Property
Current provisions do not address AI-generated work. Questions remain about ownership, licensing, and potential infringement risks, especially when third-party AI platforms are involved.
4. Third-Party Technology Risk
AI tools can produce outputs that are difficult to verify and may change over time. AIA contracts do not clearly allocate responsibility between the project team and the software provider when those tools fail or produce flawed results, although the party that selected the faulty AI tool would likely be held responsible. Where the decision to use an AI tool is made jointly, liability should be allocated jointly as well.
5. Insurance Alignment
Professional liability policies are structured around human error. It is not always clear whether coverage applies to AI-related issues, creating potential gaps between contractual liability and insurance protection.
Why This Matters
For owners, contractors, and design professionals, this creates a practical problem. AI is being used on projects, but the associated risks are often not clearly addressed, priced, or insured. Uncertainty in these areas can lead to disputes, unexpected liability, and gaps in coverage.
What to Do Now
AIA contracts remain a strong foundation, but they should be supplemented to reflect how projects are actually being delivered.
Project teams should consider addressing:
- Whether AI is being used and in what capacity
- Expectations for human oversight and verification
- Allocation of responsibility for AI-related errors
- Ownership and permitted use of AI-generated content
- Coordination with available insurance coverage
Bottom Line
As AI adoption continues to grow, these issues will become more common. Addressing them early can help align expectations and reduce risk.
If AI is part of your project, even indirectly, it is worth confirming that your contract language reflects that reality. A targeted review of your construction documents and agreements can help ensure that risk is properly allocated before issues arise.
