ChatGPT Could Become a Powerful AI Tool. So How Is It Being Regulated?
ChatGPT has only been around for a couple of months, but we've already spent that time arguing over how powerful it actually is and how it ought to be governed.
The artificial intelligence chatbot is being used by a substantial number of people to help with research, write messages on dating apps, generate code, brainstorm ideas for work, and more.
But just because something can be beneficial doesn't mean it can't also be harmful: students can use it to write essays for them, and bad actors can use it to generate malware. Even without any malice on the part of its users, it can produce misleading information, reflect biases, generate offensive content, store sensitive information, and, some worry, erode everyone's critical thinking skills through over-reliance. Then there's the ever-present (if somewhat unfounded) fear that robots will take over.
And ChatGPT can accomplish all of that without much — if any — scrutiny from the U.S. government.
It's not that ChatGPT or AI chatbots in general are necessarily bad, Nathan E. Sanders, a data scientist affiliated with the Berkman Klein Center at Harvard University, told Mashable. There are plenty of excellent applications in the democracy space that could benefit society, Sanders said; the point isn't that AI or ChatGPT shouldn't be used, but that we need to make sure it's used appropriately. The ideal goal, in his view, is to protect vulnerable communities and to ensure along the way that the interests of underrepresented groups are safeguarded, so that the wealthiest and most powerful interests don't simply come out on top.
Regulating something like ChatGPT matters because this kind of AI can show a callous disregard for individual rights like privacy, and can reinforce systemic biases around race, gender, ethnicity, age, and more. We also don't yet know where risk and liability sit when the technology is used.
"We either harness and govern AI to create a more utopian society or risk having an unbridled, unregulated AI push us toward a more dystopian future," Democratic California Rep. Ted Lieu said in a New York Times op-ed last week. In addition to this, he has presented a resolution to Congress that was penned in its whole by ChatGPT and urges the House of Representatives to back the regulation of AI. He used the following provocation: "You are Ted Lieu, a member of Congress. Write a comprehensive congressional resolution generally expressing support for Congress to work on AI."
All of this adds up to a rather murky future for regulation of AI chatbots like ChatGPT. Still, some places are already setting rules for these tools. Massachusetts State Sen. Barry Finegold has drafted a bill that would require companies using AI chatbots, like ChatGPT, to conduct risk assessments, implement security measures, and disclose to the state how their algorithms work. To combat plagiarism, the bill would also require these tools to put a watermark on their output (one proposed approach to watermarking text is sketched below).
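The bill doesn't spell out how a text watermark would work, but researchers have proposed statistical schemes for marking generated text. Below is a toy sketch of one such idea, loosely modeled on the "green list" approach from recent academic work (Kirchenbauer et al., 2023); the vocabulary, probabilities, and threshold here are invented for illustration and don't reflect how ChatGPT or any real system is watermarked.

```python
import hashlib
import random

# Toy sketch of a statistical text watermark: each token's predecessor
# deterministically defines a "green" subset of the vocabulary, the
# generator prefers green tokens, and a detector later counts how often
# tokens land in their predecessor's green list.

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "fast", "slow"]

def green_list(prev_token, fraction=0.5):
    """Derive a deterministic 'green' half of the vocabulary from the
    previous token, so a detector can recompute it without the model."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate(start, length=50):
    """Generate tokens that strongly prefer the green list, embedding a
    statistical signal. A real model would bias its probabilities instead."""
    tokens = [start]
    for _ in range(length):
        greens = list(green_list(tokens[-1]))
        pool = greens if random.random() < 0.9 else VOCAB
        tokens.append(random.choice(pool))
    return tokens

def looks_watermarked(tokens, threshold=0.75):
    """Flag text whose tokens fall in their predecessor's green list far
    more often than the ~50% expected from unwatermarked text."""
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, tok in pairs if tok in green_list(prev))
    return hits / len(pairs) >= threshold

if __name__ == "__main__":
    print("watermark detected:", looks_watermarked(generate("the")))
```

Unwatermarked text would land near the 50 percent baseline, so the detector can separate the two statistically; that is also why such watermarks can degrade if the text is heavily paraphrased.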
Finegold told Axios that because it's such a powerful tool, there have to be regulations.
Broadly, some regulations for AI already exist. The White House has published a "Blueprint for an AI Bill of Rights," which essentially explains how existing legal safeguards, such as privacy protections, civil rights, and civil liberties, apply to AI. The EEOC is taking on AI-based hiring tools over the risk that they could discriminate against protected classes. Illinois requires employers that use AI in the hiring process to allow the government to check the tool for racial bias. Several states, including Vermont, Alabama, and Illinois, have commissions charged with ensuring AI is used ethically. Colorado recently passed a bill barring insurers from using AI that collects data which unfairly discriminates based on protected classes. And, naturally, the European Union is ahead of the United States on AI regulation: in December 2022, the Council of the EU adopted its common position on the proposed Artificial Intelligence Act. None of these rules, however, is specific to ChatGPT or other AI chatbots.
So while there are some state-level rules on AI, nothing specific to chatbots like ChatGPT exists at either the state or federal level. The National Institute of Standards and Technology, part of the Department of Commerce, recently published an AI Risk Management Framework meant to guide businesses in using, designing, or deploying AI systems. But the framework is entirely voluntary; there is no penalty for ignoring it. Looking ahead, the Federal Trade Commission appears to be developing new rules that would apply to companies that build and deploy AI systems.
"Will there be some way for the federal government to make regulations or establish legislation to monitor and control this stuff? I think that is exceedingly, highly, incredibly unlikely," Dan Schwartz, an intellectual property associate with Nixon Peabody, told Mashable. It is quite unlikely that there will be any new federal regulations implemented in the near future. Schwartz believes that by the year 2023, the government will investigate the possibility of controlling the ownership of the products produced by ChatGPT. If you ask the tool to create code for you, for instance, do you own that code, or does OpenAI? http://sentrateknikaprima.com/
The second kind of regulation is likely to be private, particularly in academia. Noam Chomsky has likened ChatGPT's contribution to education to "high-tech plagiarism," and if you plagiarize in school, you risk expulsion. That's one possible model for how private regulation could operate here.