The pervasive integration of Artificial Intelligence (AI) into digital systems has sparked debates over data protection. Now embedded in many aspects of daily digital life, AI powers user personalization, delivers digital services, and generates new data.
To explore AI’s impact on data privacy, the Center for Digital Society (CfDS) at the UGM Faculty of Social and Political Sciences held a discussion with legal experts from the Ministry of Communication and Information Technology in its Digital Expert Talk #21 series on Tuesday (Dec. 5).
AI, fundamentally a data-driven command system, is a familiar presence in the digital landscape. Recent advances in its development, however, have brought more substantial impacts and a wider range of disruptions.
Rindy, representing the Ministry of Communication and Information Technology, outlined the three stages at which AI uses personal data.
“Firstly, personal data aids AI during testing to enhance its intelligence. Subsequently, personal data becomes instrumental in decision-making, where users input their personal information. Lastly, there is the potential for personal data to be disclosed through AI outputs, such as chatbot responses,” Rindy explained.
Law Number 27 of 2022 regarding Personal Data Protection (PDP) serves as the legal framework governing the responsibilities of four key entities: individuals, corporations, public bodies, and international organizations.
These entities are mandated to safeguard consumers’ personal data, particularly in AI applications.
The law defines individual personal data as information identifying a person, directly or indirectly, through electronic or non-electronic systems. The obligation of these entities lies in ensuring that AI aligns with the stipulations outlined in the law.
“Identifying personal data is not straightforward. If I input a name, that’s personal data. If I input an address, that’s personal data. It depends on whether the data entered, even a single piece, can genuinely identify a specific individual,” added Rindy.
Furthermore, she emphasized that accountability for every AI decision rests with the personal data controller, whether that role is held by the system’s owner or its manager.
The controller determines the purposes for which personal data is collected and how the AI system deploys it, and therefore bears responsibility for every decision the AI makes and every command the system executes.
Canada stands out as a pioneer in proactively regulating AI usage. Its Personal Information Protection and Electronic Documents Act (PIPEDA) establishes data-protection rules grounded in a set of fair information principles that extend to AI applications.
Beyond these regulations, Canada has also established a specialized agency to advise on the broad development of AI.
Dr. Alfatika Aunuriella Dini, a UGM Faculty of Law lecturer, commended Canada’s progressive stance on AI development and regulation.
“UGM should be proud because we have our own AI Center, even though it was only launched earlier this year. Hopefully, it will be a significant stride for the future,” she expressed.
In academia, responding to AI concerns frequently intersects with intellectual property issues: AI produces works comparable to those created by humans, yet its outputs are often generated by reprocessing material that has already been published.
“The prevailing challenges in academia related to intellectual property underscore that AI will never replace humans. Instead, those who fail to leverage or embrace AI are the ones at risk of falling behind,” she emphasized.
Author: Tasya