By Praveen RP
We are in the early stages of transformative Artificial General Intelligence (AGI) technology, and the current guidelines are a work in progress. Navigating this period requires a commitment to continuous learning and iteration, in partnership with consortiums across the ecosystem, to identify solutions that are both optimal and broadly acceptable.
Commitment to Fairness and Transparency
Companies must look beyond the economic benefits of AI and prioritize fairness and transparency. All organizations developing AI systems should establish their own Ethics Charters, translating high-level principles into practical guidelines. These guidelines should be easily understandable for employees, complete with examples to illustrate how to navigate ethical dilemmas. Specific actions should be outlined for each phase of an AI project—before, during, and after development.
Addressing AI Risks
The skepticism surrounding AI is driven by several pertinent risks:
- Job Displacement: The Business Process Outsourcing (BPO) sector and customer service roles are increasingly affected by automation. According to the World Economic Forum, over 85 million jobs may be displaced by 2025 due to automation. As AI adoption increases, job roles will transform, requiring workers to adapt to new technologies. Leaders must proactively cross-skill their workforces for AI-driven change, while individuals should focus on upskilling to mitigate the risk of job loss. Ultimately, it is not AI replacing humans, but humans augmented by AI replacing those who do not adapt.
- Disinformation: The rise of deepfake technology poses significant risks, especially in the political arena. Research indicates that deepfakes can manipulate public opinion and interfere with electoral processes. To combat this, all AI-generated content should carry labels or watermarks for traceability, ensuring accountability in media.
- Bias in AI: Bias often stems from flawed data sampling, resulting in over-representation or under-representation of certain groups. According to a study by MIT Media Lab, facial recognition systems exhibit 34% higher error rates for darker-skinned individuals compared to lighter-skinned individuals.