Biden Administration Seeks Public Input on Regulatory Framework for AI Models like ChatGPT


American officials are seeking public input on regulations for AI systems like ChatGPT. The National Telecommunications and Information Administration (NTIA) is soliciting comments on measures to ensure accountability from AI creators, which the agency says would help the Biden administration guarantee that these models work as claimed “without causing harm.”

Areas for input include incentives for trustworthy AI, safety testing methods, and data access requirements for system evaluation. The NTIA is also considering whether different approaches may be necessary for specific fields, such as healthcare.

The deadline for comments on AI accountability is June 10th. The NTIA views rulemaking as potentially crucial, citing a “growing number of incidents” in which AI has caused harm. Rules could help prevent such incidents from recurring and minimize the risks posed by future threats.

ChatGPT and similar generative AI models have already been linked to data leaks and copyright infringement, and they raise concerns about automated disinformation and malware campaigns. Accuracy and bias are also prominent worries. Although developers are addressing these challenges with more sophisticated systems, some researchers and tech leaders have called for a six-month pause in AI development to improve safety and address ethical concerns.

The Biden administration has yet to take a definitive stance on the risks of AI. President Biden recently discussed the issue with advisors but said it was too soon to determine whether the technology is dangerous. With the NTIA’s request for comment, however, the government moves a step closer to establishing a firm position.