AI is here. How can real estate navigate the risks and stay ahead?

Artificial intelligence is no longer a distant possibility. It’s a key part of business strategy today.

As companies launch AI pilots, the associated risks are becoming increasingly tangible. Is your data vulnerable when it’s fed into AI models? Could you face liability for using GenAI to analyze client documents? What if your AI-generated results are wrong? Concerns and uncertainties like these have held many companies back, with some banning the use of GenAI tools altogether.

However, many companies are realizing that one of the biggest risks is falling behind. Real estate executives are increasingly aware that they can't maintain a wait-and-see approach much longer: leading companies are already connecting GenAI models to their portfolio databases to extract insights on property performance and inform strategies for portfolio improvement.

Is the technology still embryonic? Certainly. But the key is understanding the challenges and employing AI in a way that safeguards against any potential pitfalls.

This article details some of the key risks of adopting AI tools for real estate applications, diving into: 

  • What AI risks you need to be aware of in real estate use cases.
  • How the emerging regulatory environment will impact your use of AI.
  • How you can manage these risks through strategic, technical, and legal measures.

Navigating risks to stay ahead of the pack

Understanding your role in bringing AI to your business is crucial. So let's look at how some real estate companies are already using the technology in an array of applications.

Common ways of interacting with AI and GenAI systems include:

  • Using existing foundation models or tools such as GPT-4 or Microsoft Copilot for internal tasks, e.g., summarizing a market research report.
  • Purchasing off-the-shelf AI-powered products/services from PropTech service providers, e.g., buying a SaaS product for HVAC system control.
  • Partnering with external AI experts to customize solutions to your specific business needs, e.g., creating a customized tool to optimize sustainability performance across a portfolio.
  • Training and fine-tuning models with proprietary data to provide services through a client-facing interface, e.g., building a client-facing chatbot that makes investment recommendations using historical transaction data.

In these use cases, real estate investors, developers, and corporate occupiers are generally categorized as "AI users/deployers," a term defined as "natural or legal persons that deploy an AI system in a professional capacity" by the European Union’s AI Act. This distinguishes them from AI developers and individual end-users.

AI developers focus on creating systems and ensuring the systems function correctly and responsibly, while AI users/deployers must navigate the practical, ethical, and regulatory implications of implementing and relying on these systems in their professional activities.

For companies acting as AI users/deployers, risks fall into three categories: data security, privacy and IP risks; regulatory and legal risks; and operational and business risks.

[Figure: The three categories of AI risk for real estate AI users/deployers. Source: JLL Research, March 2024]

While the potential for damage looks substantial, these risks can be effectively managed with well-crafted strategic, technical, and legal frameworks. The next sections focus on key considerations in mitigating these three types of risks for real estate investors, developers, and corporate occupiers.

Privacy, IP and data security require strong governance

Real estate technology adopters are already familiar with the challenges of data security, privacy and IP. Over 1,000 senior decision-makers in real estate globally have identified these issues as the biggest challenge in their technology initiatives. AI introduces additional complexity, but it does not alter their nature.

Each use case has its nuances, but for AI users/deployers there is a set of critical questions to consider. They also apply to your broader tech initiatives.

  • How is the model trained? Where is it stored?
  • How is your data used? Is there any IP/trade secret involved? Can you opt out?
  • How secure is the tool? Can your data be encrypted? When using third-party services, is the provider compliant with regulations such as GDPR and the EU AI Act?

One scenario unique to GenAI and Large Language Models (LLMs) arises when employees inadvertently submit proprietary information, such as transaction history, to a public model as part of their prompts; that information could then be used in training later iterations of the model. A similar breach can occur when fine-tuning foundation models with proprietary data.

To mitigate this type of data leak, companies could consider establishing a "sandbox" environment for deploying and fine-tuning foundation models, doubling down on data governance, setting up a responsible data use policy and investing in extensive employee training.
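
As an illustration, the sketch below shows one kind of technical guardrail such a policy might mandate: a gateway that screens outgoing prompts for proprietary markers before they are forwarded to a third-party GenAI service. This is a minimal sketch in Python; the patterns, example prompts and routing advice are hypothetical, and a real deployment would maintain the pattern list centrally as part of its data governance program.

```python
import re

# Hypothetical markers of proprietary content; a real deployment would
# maintain these centrally under its data governance program.
PROPRIETARY_PATTERNS = [
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),  # classification stamps
    re.compile(r"\bDEAL-\d{4,}\b"),                  # internal transaction IDs
]

def screen_prompt(prompt: str) -> str:
    """Block prompts that appear to contain proprietary data before they
    reach an external GenAI service; cleared prompts pass through unchanged."""
    for pattern in PROPRIETARY_PATTERNS:
        if pattern.search(prompt):
            raise ValueError(
                "Prompt appears to contain proprietary data; "
                "route it to the internal sandbox model instead."
            )
    return prompt

if __name__ == "__main__":
    for prompt in (
        "Summarize public market trends for logistics assets.",  # cleared
        "Summarize the CONFIDENTIAL memo on DEAL-20481.",        # blocked
    ):
        try:
            print("forwarded:", screen_prompt(prompt))
        except ValueError as err:
            print("blocked:", err)
```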

Copyright challenges intensify when engaging with GenAI for content creation, particularly with images intended for public use. Not only is there a risk of your IP being infringed upon, but there's also the possibility of inadvertently infringing on others' IP. If the model has not been developed responsibly with licensed content, users of the model may also be held liable. While many model developers claim fair use, the legal landscape remains uncertain. Choosing your AI provider carefully and establishing guidelines on where and how AI cannot be applied are critical here.

"Potential risks in leveraging AI for real estate aren't barricades, but rather steppingstones. With agility, quick adaptation, and partnership with trusted experts, we convert these risks into opportunities.” 

Yao Morin
Chief Technology Officer, JLLT

New regulations are changing the game

AI regulations are hitting a milestone in 2024. Following the U.S. Executive Order on AI at the end of October 2023, the EU AI Act, the world's first comprehensive AI law, has been approved by the European Parliament. It’s expected to set a global benchmark similar to the GDPR. Concurrently, regulators and lawmakers in a number of other countries, including China, Canada and Australia, are actively advancing their own AI legislative efforts.

The EU AI Act classifies AI systems according to their societal risks into:

  • Unacceptable risk (e.g., social scoring and manipulative AI)
  • High risk (e.g., biometrics, critical infrastructure, border control)
  • Limited risk (e.g., chatbots)
  • Minimal risk (e.g., video games, spam filters)

Different requirements apply to each risk tier, with the majority of obligations falling on AI providers and developers rather than users (although some remain with users). These obligations encompass enhancing data governance, disclosing technical documentation, providing instructions for use and complying with the EU’s Copyright Directive, among others.
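
To make this concrete, a deployer might keep an internal inventory that maps each AI use case to its presumed tier under the Act, together with the obligations that follow. Below is a minimal sketch in Python; the entries, providers and tier assignments are illustrative assumptions rather than legal determinations, and any real register would be validated with counsel.

```python
from dataclasses import dataclass, field

# Risk tiers from the EU AI Act classification above.
TIERS = ("unacceptable", "high", "limited", "minimal")

@dataclass
class AIUseCase:
    name: str
    provider: str
    tier: str  # presumed tier, pending formal legal review
    obligations: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        if self.tier not in TIERS:
            raise ValueError(f"Unknown risk tier: {self.tier!r}")

# Illustrative entries only; actual tier assignments require legal review.
register = [
    AIUseCase(
        name="Client-facing leasing chatbot",
        provider="external LLM vendor",
        tier="limited",
        obligations=["Disclose to users that they are interacting with AI"],
    ),
    AIUseCase(
        name="Internal report summarization",
        provider="Microsoft Copilot",
        tier="minimal",
        obligations=["Follow internal responsible-use guidelines"],
    ),
]

for uc in register:
    print(f"{uc.name} [{uc.tier} risk]: {'; '.join(uc.obligations)}")
```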

For real estate AI users/deployers, this regulatory step is welcome, promising increased transparency in data collection, model training, and use. Nonetheless, as these regulations take effect, it is imperative to evaluate compliance within the specific context of your use cases, making sure your AI providers build tools responsibly. Failure to do so could result in fines, liabilities, or even criminal penalties for your business.

In addition to overarching AI regulations, there are also regulations targeted at specific tools and use cases. For instance, Automated Valuation Models (AVMs) will be regulated in the U.S. and EU through legislation and appraisal standards in the near future.

At the same time, using AI systems irresponsibly could violate real estate industry regulations such as fair housing laws and antitrust rules. For example, RealPage was accused of anti-competitive practices through its pricing algorithms for rental housing: tools designed to optimize pricing can inadvertently lead to illegal price-fixing or other anti-competitive behaviors. It is important to be aware of such risks and assess your application carefully.

The right design is critical

Operational and business risks are twofold: first, there is the risk of ineffective applications resulting in cost overruns or diminished returns on investment; second, there is the possibility of inaccurate AI-generated outputs or misuse of such outputs, leading to flawed business decision-making, lowered quality of work, and compromised service to clients.

For real estate professionals, operational and business risks demand the most attention. The key to mitigating them lies in thoroughly understanding how AI systems do and don't work, assessing where they can most productively support specific tasks and workflows, and building resilience around them through human agency. Most companies will need a trusted partner with extensive AI and real estate expertise to navigate this process together.

One of the most common causes of operational risk with GenAI models is the gap between precision and accuracy. LLMs can generate content with remarkable precision and exude high confidence in their outputs, yet these outputs aren't always accurate, a phenomenon known as “AI hallucination”.

Mitigating operational risks of GenAI:

  1. Differentiate between low-risk use cases (e.g., writing internal correspondence) and high-risk ones (e.g., those involving highly sensitive data or uncapped client liability), and focus on experimenting with the low-risk use cases first.
  2. Don’t just take any foundation model and start asking it professional questions. Make sure the model is fine-tuned with data that is highly relevant and specific to your professional queries.
  3. Set up corporate-wide “Responsible Use Guidelines” and invest in upskilling and training your workforce to fully understand AI's capabilities and limitations, instructing them on integrating AI tools into workflows and using them ethically, and providing clear guidance on prompt engineering.
  4. Vigilantly monitor the quality of the AI's output, instituting human review before utilizing the output in any capacity (see the sketch after this list). Use AI as part of your research process, not as the end result. If the aim is to fully automate a process, the risk of misinformation is much higher than when a human agent is involved.
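
To make the human-review step in point 4 concrete, here is a minimal sketch in Python of a gate that holds AI-generated output in a pending state until a named human reviewer signs off. The AIDraft class, its field names and the example usage are hypothetical, not part of any specific product.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIDraft:
    """AI-generated output held in a pending state until a human signs off."""
    content: str
    source_model: str
    status: str = "pending_review"  # pending_review -> approved
    reviewer: str | None = None
    reviewed_at: datetime | None = None

    def approve(self, reviewer: str) -> None:
        """Record a named human reviewer's sign-off."""
        self.status = "approved"
        self.reviewer = reviewer
        self.reviewed_at = datetime.now(timezone.utc)

    def release(self) -> str:
        """Refuse to release output that has not passed human review."""
        if self.status != "approved":
            raise PermissionError("Output has not been approved by a human reviewer.")
        return self.content

# Usage: the draft text stands in for a model response.
draft = AIDraft(content="Draft market summary ...", source_model="internal-llm")
try:
    draft.release()  # blocked: no human has reviewed it yet
except PermissionError as err:
    print("blocked:", err)

draft.approve(reviewer="analyst@example.com")
print("released:", draft.release())  # now permitted
```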

If we draw an analogy between AI and a human assistant, it is your responsibility to find the right position for this assistant (choose pilot projects wisely), ensure they receive the necessary professional training (work with a trusted partner and use relevant data to train the model), foster a collaborative network with other team members (implement a human co-pilot model and invest in your employees' AI literacy), and promptly address any errors they might make (constantly monitor the quality of output).

AI applications should never be driven by a trend; they should always be aligned with clear business objectives. In today's difficult operating environment, it is even more important to identify use cases where AI truly enhances problem-solving. Adhering to this principle is key to sidestepping the pitfalls of ineffective investments.

In conclusion, while the risks inherent in AI cannot be ignored, the strategic management of these complexities holds the promise of unlocking unprecedented productivity advancements for the real estate industry.

Want to learn more?

Get in touch with our research team to find out how we can support your real estate strategy with market insights and strategic advice.

Yuehan Wang

Associate – Technology Research, Global Insight

Charles Fisher

Director, Risk Analytics

Ram Srinivasan

Managing Director, Future of Work Consulting

Michael Thompson

Senior Director, JLLT BI and Data Advisory