Deploying AI: Navigating Innovation, Regulation, and Responsibility

Artificial Intelligence (AI) is no longer a futuristic concept—it is a present-day reality reshaping industries across the globe. At AIMX 2024, a panel of leaders from diverse sectors including technology, logistics, healthcare, and finance convened to discuss the multifaceted journey of deploying AI. Their insights revealed a complex landscape where innovation, regulation, and corporate responsibility must be carefully balanced.
The Regulatory Landscape: Hard Law vs. Soft Law
One of the central themes was the contrast between regulatory approaches. The European Union’s AI Act represents a “hard law” model, imposing strict compliance requirements and steep penalties for the most serious violations: up to €35 million or 7% of global annual turnover, whichever is higher. The approach is designed to protect human rights and values, imposing tiered obligations on high-risk AI systems and banning practices such as social scoring outright.
In contrast, Singapore and other Asian countries favor “soft law,” emphasizing innovation and flexibility. Singapore’s Infocomm Media Development Authority (IMDA) promotes AI through frameworks like AI Verify, which encourages community-driven risk management and governance. South Korea, meanwhile, integrates AI oversight into existing laws, for example by criminalizing the possession of sexually exploitative deepfake material.
Despite differing methods, the outcome is often similar: safeguarding users while enabling innovation. The choice between hard and soft law depends on regional governance structures and cultural attitudes toward regulation.
Corporate AI Policies: Ethics and Governance
Many companies are still in the early stages of developing AI policies. While some integrate AI into existing IT or digital transformation frameworks, others are establishing dedicated AI ethics policies. These policies often include governance structures such as ethics advisory panels, steering committees, and working groups to assess AI use cases for risks related to data privacy, security, and transparency.
Internal AI platforms are becoming common, allowing employees to experiment with generative AI in secure environments. These platforms often block external internet access to prevent data leakage and ensure compliance with corporate standards.
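To make the pattern concrete, here is a minimal sketch (not drawn from any panellist’s actual setup) of how an employee-facing client might route every generative AI call through a company-hosted gateway instead of a public API. The hostname, model name, and token variable are hypothetical, and in practice the block on external access is enforced at the network layer rather than in application code.

```python
# Minimal sketch of a client for a hypothetical internal GenAI gateway.
# Hostname, model name, and token variable are illustrative assumptions;
# the real safeguard is a network policy denying egress to public AI APIs.
import os
import requests

GATEWAY_URL = "https://ai-gateway.corp.internal/v1/chat/completions"  # hypothetical internal endpoint

def ask_internal_llm(prompt: str) -> str:
    """Send a prompt to the company-hosted model behind the internal gateway."""
    response = requests.post(
        GATEWAY_URL,
        headers={"Authorization": f"Bearer {os.environ['CORP_AI_TOKEN']}"},  # internal credential, not a vendor key
        json={
            "model": "corp-approved-model",  # only models vetted under the company's AI policy
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_internal_llm("Summarise our data-retention policy in one paragraph."))
```

Keeping prompts and responses inside the corporate perimeter in this way is what lets employees experiment freely while the platform team retains control over logging, data residency, and which models are approved for use.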
AI in Practice: Sector-Specific Adoption
Financial Services
In banking, AI is transforming customer experience, productivity, and risk management. Chatbots, virtual assistants, and predictive models are used for fraud detection, credit scoring, and anti-money laundering. AI also supports hyper-personalized financial advice and embedded finance solutions. Regulatory compliance remains critical, with a strong emphasis on model explainability and data governance.
Healthcare
Healthcare is cautiously embracing AI, particularly in diagnostic imaging and patient monitoring. AI helps reduce scan times, improve accuracy, and support overburdened medical staff. However, generative AI is largely absent due to strict regulatory requirements. Instead, pre-approved predictive models are used to ensure patient safety and trust.
Logistics and Transportation
AI is streamlining operations in logistics, especially in customs clearance and customer service. Internal AI hubs enable employees to automate workflows and improve efficiency. For example, warehouse staff have developed apps to report packaging issues, demonstrating grassroots innovation. AI adoption is guided by ethics policies and supported by legal and IT teams.
Workforce Transformation: Reskilling and Upskilling
The rise of AI has sparked concerns about job displacement. Companies are responding with proactive reskilling initiatives. Training programs introduce employees to tools like Power BI and Power Automate, empowering them to automate tasks and improve productivity. These efforts foster a culture of continuous learning and reduce fear around technology.
In healthcare, the shortage of skilled professionals is driving AI adoption. With fewer nurses and radiographers entering the workforce, AI is seen as a necessary support system rather than a threat.
Looking Ahead: The Brussels Effect and Global Standards
The “Brussels Effect”—where EU regulations influence global standards—was highlighted as a key consideration for startups and multinational companies. While Southeast Asia currently enjoys regulatory flexibility, EU standards may become benchmarks in future tenders and partnerships. Companies are advised to prepare early by aligning with emerging global frameworks.
Final Thoughts
The deployment of AI is a journey that requires thoughtful navigation. Companies must balance innovation with regulation, embrace ethical governance, and invest in workforce transformation. As AI becomes ubiquitous, the focus is shifting from “why” to “how”—how to deploy AI responsibly, effectively, and inclusively.