AI and Machine Learning Algorithms: Public Policy Framework 2032

Semester: Spring 2022

AI-powered technologies are being adopted at unprecedented rates across both the public and private sectors. However, the negative implications of biased AI algorithms have compelled governments to propose regulations to foster accountable AI. Against this backdrop, the Columbia SIPA Capstone team aimed to forecast AI regulatory frameworks for 2032.

The Capstone team foresees regulatory measures being implemented across three verticals – data collection, data modeling, and auditing. They further believe that human oversight, adaptive learning, and biometric data are likely to fall under the purview of future regulations. The team’s regional analysis indicates that future AI regulatory frameworks will be based on the principles of fairness, accountability, interpretability, explainability, transparency, and robustness. Although the regulatory landscape is likely to differ across regions, governments will increasingly develop regulations to safeguard their citizens from the potential harmful use of AI-powered technologies. Among the different applications of AI, the Capstone team predicts that computer vision and natural language processing will face heavier regulatory scrutiny because of their elevated risk of failing to uphold the aforementioned principles. This is highlighted in the healthcare and financial services use cases in their report.

Finally, the team believes that the ethical concerns raised by these technologies will give impetus to privacy-enhancing technologies such as differential privacy, synthetic data, and homomorphic encryption. While the past decades focused on growth, the upcoming decade is more likely to emphasize AI governance frameworks. The team therefore suggests that corporations take these regulatory developments into account to meet both current and future compliance requirements, while continuing to accelerate digital transformation across societies.
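
To make one of these privacy-enhancing technologies concrete, the sketch below illustrates the Laplace mechanism, a standard building block of differential privacy: calibrated random noise is added to a query result so that the presence or absence of any single individual's record has only a bounded effect on what is released. This is a minimal illustrative example, not code from the Capstone report; the function and parameter names are assumptions chosen for clarity.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of `true_value`.

    Noise is drawn from a Laplace distribution with scale = sensitivity / epsilon,
    which satisfies epsilon-differential privacy for a query whose output changes
    by at most `sensitivity` when one individual's record is added or removed.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release a count of records matching some condition.
# A counting query has sensitivity 1, since one person changes the count by at most 1.
true_count = 1234
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"True count: {true_count}, DP release: {private_count:.1f}")
```

A smaller epsilon adds more noise and gives stronger privacy at the cost of accuracy, which is the central trade-off regulators and practitioners would weigh when mandating or adopting such techniques.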