OpenAI Bans Accounts of North Korean Hacker Group Suspected of Malicious Activities
BlockBeats reported on February 27 that OpenAI, the developer of ChatGPT, recently stated it has banned and removed accounts belonging to North Korean users suspected of using its technology for malicious activities, including surveillance and propaganda manipulation. In a report, OpenAI said these activities illustrate how authoritarian regimes may attempt to use AI technology against the United States as well as against their own people. The company added that it uses AI tools to detect such malicious operations. OpenAI did not disclose the number of banned accounts or the timeframe of the actions. In one instance, malicious actors possibly linked to North Korea used AI to generate fake resumes and online job-seeker profiles in order to fraudulently apply for positions at Western companies. Separately, a group of ChatGPT accounts suspected of involvement in financial fraud operations based in Cambodia used OpenAI's technology to translate and generate comments on social media and communication platforms, including X and Facebook.