
For companies acting as AI users/deployers, risks come in three categories:

Privacy, IP and data security require strong governance

Real estate technology adopters are already familiar with the challenges of data security, privacy and IP: more than 1,000 senior real estate decision-makers globally have identified these as the biggest challenge in their technology initiatives. AI adds complexity to these issues, but it does not alter their nature.

Each use case has its nuances, but there is a common set of critical questions for AI users/deployers to consider. These questions also apply to your broader technology initiatives.

  • How is the model trained? Where is it stored?
  • How is your data used? Are any IP or trade secrets involved? Can you opt out?
  • How secure is the tool? Can your data be encrypted? When using third-party services, is the provider compliant with regulations such as GDPR and the EU AI Act?

One scenario unique to GenAI and large language models (LLMs) arises when employees accidentally upload proprietary information, such as transaction history, into the public domain as part of their prompts; that data could then be used to train later iterations of the model. A similar breach might occur when fine-tuning foundation models with proprietary data.

To mitigate this type of data leak, companies could consider establishing a "sandbox" environment for deploying and fine-tuning foundation models, doubling down on data governance, setting up a responsible data-use policy and investing in extensive employee training.

Copyright challenges intensify when using GenAI for content creation, particularly for images intended for public use. Not only is there a risk of your own IP being infringed, but there is also the possibility of inadvertently infringing on others' IP. If a model has not been developed responsibly with licensed content, its users may also be held liable. While many model developers claim fair use, the legal landscape remains uncertain. Choosing your AI provider carefully and establishing guidelines on where and how AI must not be applied are therefore critical.

The right design is critical

Operational and business risks are twofold: first, there is the risk that ineffective applications result in cost overruns or diminished returns on investment; second, there is the possibility that inaccurate AI-generated outputs, or the misuse of those outputs, lead to flawed business decision-making, lower-quality work and compromised service to clients.

For real estate professionals, operational and business risks demand the most attention. The key to mitigating them lies in thoroughly understanding how AI systems do and do not work, assessing where they can most productively carry out certain tasks or support specific workflows, and building resilience around them through human agency. Most companies will need a trusted partner with extensive AI and real estate expertise to navigate this process together.

One of the most common causes of operational risk with GenAI models is the tension between precision and accuracy. LLMs can generate content with remarkable precision and exude high confidence in their outputs, yet these outputs are not always accurate, a phenomenon known as “AI hallucination”.

Mitigating operational risks of GenAI:

  1. Differentiate between low-risk use cases (e.g., writing internal correspondence) and high-risk ones (e.g., those involving highly sensitive data or uncapped client liability), and focus on experimenting with the low-risk use cases first.
  2. Don’t just take any foundation model and start asking it professional questions. Make sure the model is fine-tuned with data that is highly relevant and specific to your professional queries.
  3. Set up corporate-wide “Responsible Use Guidelines” and invest in upskilling and training your workforce: help employees fully understand AI's capabilities and limitations, instruct them on properly integrating AI tools into workflows and on ethical use, and provide clear guidance on prompt engineering.
  4. Vigilantly monitor the quality of the AI's output, instituting human review before using it in any capacity. Treat it as part of your research process, not as the end result. If the aim is to fully automate a task, the risk of misinformation is much higher than when a human agent is involved in the process.

If we draw an analogy between AI and a human assistant, it is your responsibility to find the right position for this assistant (choose pilot projects wisely), ensure they receive the necessary professional training (work with a trusted partner and use relevant data to train the model), foster a collaborative network with other team members (implement a human co-pilot model and invest in your employees' AI literacy), and promptly address any errors they might make (constantly monitor the quality of output).

AI applications should never be driven by trends; they should always be aligned with clear business objectives. In today's difficult operating environment, it is even more important to identify use cases where AI truly enhances problem-solving. Adhering to this principle is key to sidestepping the pitfalls of ineffective investment.

In conclusion, while the risks inherent in AI cannot be ignored, the strategic management of these complexities holds the promise of unlocking unprecedented productivity advancements for the real estate industry.

Explore more of JLL's latest insights on World Economic Forum themes at our dedicated Davos page.