Artificial Intelligence Is Moving Faster Than Regulation
Artificial intelligence (AI) technologies are increasingly embedded in business operations across Indonesia, including financial services, digital commerce, logistics, and human resource management. Organizations are deploying automated systems to screen applicants, assess creditworthiness, detect fraud, and generate operational decisions at scale.
While these technologies offer significant efficiency and innovation benefits, their adoption has also introduced a growing category of legal risk. When artificial intelligence systems produce incorrect, biased, or harmful outcomes, the consequences may include financial loss, contractual disputes, regulatory enforcement, and reputational damage.
Indonesia has not yet enacted a comprehensive statute specifically regulating artificial intelligence. However, this regulatory gap should not be interpreted as an absence of legal responsibility. Existing legal frameworks administered by authorities such as the Financial Services Authority (OJK) and the Ministry of Communication and Digital Affairs of the Republic of Indonesia (KEMKOMDIGI) already establish obligations for businesses operating digital systems and processing data.
As organizations increasingly rely on automated technologies to make decisions, the question of legal responsibility is becoming more immediate and commercially significant.
Automated Decision-Making: When Systems Decide Without Context
A defining feature of artificial intelligence systems is their reliance on automated decision-making processes. These systems generate outputs based on patterns identified in data, using predefined models and algorithmic logic.
However, automated systems do not inherently understand context, intent, fairness, or changing circumstances. They make decisions based on the data available to them, rather than the broader situation in which those decisions are applied.
In practice, this limitation can create legal risk. For example:
- an automated credit assessment system may reject an application based solely on historical data patterns;
- a fraud detection algorithm may block legitimate transactions;
- a recruitment tool may unintentionally produce discriminatory outcomes.
Where harm results from such decisions, Indonesian law generally places responsibility on the organization deploying the system. The fact that a decision was generated by artificial intelligence does not remove the duty to exercise oversight and ensure compliance with applicable legal standards.
As reliance on automation increases, businesses may face growing exposure if governance mechanisms are not designed to monitor and review automated outputs.
Existing Indonesian Law Already Creates Liability Exposure
Even in the absence of dedicated artificial intelligence legislation, several existing laws in Indonesia provide a clear basis for liability when automated systems cause harm.
Relevant legal frameworks include:
- Indonesian Civil Code, which establishes liability for unlawful acts causing loss or damage;
- Law No. 11 of 2008 on Electronic Information and Transactions, as amended, which regulates electronic systems and digital platforms;
- Law No. 27 of 2022 on Personal Data Protection, which governs the processing and protection of personal data;
- Law No. 8 of 1999 on Consumer Protection, which requires businesses to ensure the safety and reliability of products and services.
These laws collectively establish a consistent principle:
Technology does not shift responsibility away from the business operating it.
Organizations deploying artificial intelligence systems remain responsible for ensuring that their operations do not create avoidable risk for customers, employees, or business partners.
The Expanding Scope of Responsibility in AI Deployment
One of the practical challenges in artificial intelligence incidents is determining who bears responsibility when harm occurs. Liability may extend across multiple parties involved in the design, implementation, and operation of the system.
Potentially responsible stakeholders may include:
- Developers, where system design or coding errors create operational risk;
- Technology vendors, where software or infrastructure fails to perform as expected;
- Business operators, where systems are deployed without adequate oversight or risk management controls;
- Service providers, where contractual obligations relating to system reliability or data protection are not fulfilled.
In many cases, responsibility is determined by assessing control over the system and the ability to prevent harm. Organizations that deploy artificial intelligence technologies therefore remain accountable for ensuring that appropriate safeguards are implemented and maintained.
Regulatory Attention Is Increasing and Businesses Should Prepare
Regulators in Indonesia and globally are placing increasing emphasis on the governance of automated technologies and digital decision-making systems. This trend reflects growing awareness of the potential risks associated with artificial intelligence, particularly in sectors involving consumer services and financial transactions.
As regulatory expectations evolve, organizations that rely heavily on automation may face increased scrutiny regarding:
- transparency in automated decision-making;
- accountability for system outcomes;
- protection of consumer and personal data;
- reliability and security of digital systems.
From a risk management perspective, the issue is no longer whether artificial intelligence will be subject to greater regulation, but how quickly enforcement expectations will develop.
Conclusion
Artificial intelligence is rapidly transforming the way organizations operate, but the legal consequences of automated decision-making are becoming more visible. Businesses that deploy AI systems without clear governance frameworks may face significant legal exposure as regulatory expectations continue to evolve.
In Part 2 of this series, we examine additional legal risks arising from artificial intelligence deployment, including intellectual property ownership, cross-border data governance, and practical compliance considerations for organizations operating in Indonesia.
If you have further inquiries about the topic discussed above, Schinder Law Firm is a corporate law firm in Indonesia that has handled numerous similar matters, with a team of experienced corporate and civil lawyers. Feel free to contact us at info@schinderlawfirm.com for further consultation.
Author:
Budhi Satya Makmur