Frontier Model Forum
Governments and industry agree that, while AI offers tremendous promise to benefit the world, appropriate guardrails are required to mitigate risks. Important contributions to these efforts have already been made by the US and UK governments, the European Union, the OECD, the G7 (via the Hiroshima AI process), and others. 

To build on these efforts, further work is needed on safety standards and evaluations to ensure frontier AI models are developed and deployed responsibly. The Forum will be one vehicle for cross-organizational discussions and actions on AI safety and responsibility.  

The Forum will focus on three key areas over the coming year to support the safe and responsible development of frontier AI models:

  • Identifying best practices: Promote knowledge sharing and best practices among industry, governments, civil society, and academia, with a focus on safety standards and safety practices to mitigate a wide range of potential risks. 
  • Advancing AI safety research: Support the AI safety ecosystem by identifying the most important open research questions on AI safety. The Forum will coordinate research to progress these efforts in areas such as adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors and anomaly detection. There will be a strong focus initially on developing and sharing a public library of technical evaluations and benchmarks for frontier AI models.
  • Facilitating information sharing among companies and governments: Establish trusted, secure mechanisms for sharing information among companies, governments and relevant stakeholders regarding AI safety and risks. The Forum will follow best practices in responsible disclosure from areas such as cybersecurity.


Kent Walker, President, Global Affairs, Google & Alphabet said: “We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation. We’re all going to need to work together to make sure AI benefits everyone.”

Brad Smith, Vice Chair & President, Microsoft said: “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

Anna Makanju, Vice President of Global Affairs, OpenAI said: “Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance. It is vital that AI companies–especially those working on the most powerful models–align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible. This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.” 

Dario Amodei, CEO, Anthropic said: “Anthropic believes that AI has the potential to fundamentally change how the world works. We are excited to collaborate with industry, civil society, government, and academia to promote safe and responsible development of the technology. The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety.”



We’re bringing the Financial Times’ world-class journalism to ChatGPT
Editor’s note: This news was originally shared by the Financial Times and can be read here.  

The Financial Times today announced a strategic partnership and licensing agreement with OpenAI, a leader in artificial intelligence research and deployment, to enhance ChatGPT with attributed content, help improve its models’ usefulness by incorporating FT journalism, and collaborate on developing new AI products and features for FT readers. 

Through the partnership, ChatGPT users will be able to see select attributed summaries, quotes and rich links to FT journalism in response to relevant queries. 

In addition, the FT became a customer of ChatGPT Enterprise earlier this year, purchasing access for all FT employees to ensure its teams are well-versed in the technology and can benefit from the creativity and productivity gains made possible by OpenAI’s tools. 

“This is an important agreement in a number of respects,” said FT Group CEO John Ridding. “It recognises the value of our award-winning journalism and will give us early insights into how content is surfaced through AI. We have long been a leader in news media innovation, pioneering the subscription model and engagement technologies, and this partnership will help to keep us at the forefront of developments in how people access and use information.” 

“The FT is committed to human journalism, as produced by our unrivalled newsroom, and this agreement will broaden the reach of that work, while deepening our understanding of reader demands and interests,” Ridding added. “Apart from the benefits to the FT, there are broader implications for the industry. It’s right, of course, that AI platforms pay publishers for the use of their material. OpenAI understands the importance of transparency, attribution, and compensation – all essential for us. At the same time, it’s clearly in the interests of users that these products contain reliable sources.” 

Brad Lightcap, COO of OpenAI, expressed enthusiasm about the evolving relationship with the Financial Times, stating: “Our partnership and ongoing dialogue with the FT is about finding creative and productive ways for AI to empower news organisations and journalists, and enrich the ChatGPT experience with real-time, world-class journalism for millions of people around the world.” 

“We’re keen to explore the practical outcomes regarding news sources and AI through this partnership,” said Ridding. “We value the opportunity to be inside the development loop as people discover content in new ways. As with any transformative technology, there is potential for significant advancements and major challenges, but what’s never possible is turning back time. It’s important for us to represent quality journalism as these products take shape – with the appropriate safeguards in place to protect the FT’s content and brand. 

“We have always embraced new technologies and disruption, and we’ll continue to operate with both curiosity and vigilance as we navigate this next wave of change.”



Introducing more enterprise-grade features for API customers
To help organizations scale their AI usage without over-extending their budgets, we’ve added two new ways to reduce costs on consistent and asynchronous workloads:

  • Discounted usage on committed throughput: Customers with a sustained level of tokens per minute (TPM) usage on GPT-4 or GPT-4 Turbo can request access to provisioned throughput to get discounts ranging from 10–50% based on the size of the commitment.
  • Reduced costs on asynchronous workloads: Customers can use our new Batch API to run non-urgent workloads asynchronously. Batch API requests are priced at 50% off shared prices, offer much higher rate limits, and return results within 24 hours. This is ideal for use cases like model evaluation, offline classification, summarization, and synthetic data generation.
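A Batch API job is submitted as a JSONL file in which each line is one self-contained request carrying a unique `custom_id` so results can be matched back after the asynchronous run completes. A minimal sketch of assembling that file locally (the helper name, prompts, and model string are illustrative, not part of the announcement; uploading the file and creating the batch would then use the official SDK):

```python
import json

def build_batch_file(prompts, model="gpt-4-turbo"):
    """Build the JSONL payload for a Batch API job: one request per line,
    each tagged with a unique custom_id for matching results to inputs."""
    lines = []
    for i, prompt in enumerate(prompts):
        request = {
            "custom_id": f"task-{i}",
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
            },
        }
        lines.append(json.dumps(request))
    return "\n".join(lines)

# Example: an offline classification workload, the kind of non-urgent job
# the Batch API prices at 50% of shared rates with results within 24 hours.
payload = build_batch_file([
    "Classify the sentiment of: 'Great quarter for the team.'",
    "Classify the sentiment of: 'Revenue missed expectations.'",
    "Classify the sentiment of: 'Results were in line with guidance.'",
])
# Next steps (not shown): write `payload` to a .jsonl file, upload it,
# and create the batch with completion_window="24h".
```

Because each line is independent, failed requests can be identified by `custom_id` and resubmitted in a follow-up batch without rerunning the whole workload.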


We plan to keep adding new features focused on enterprise-grade security, administrative controls, and cost management. For more information on these launches, visit our API documentation or get in touch with our team to discuss custom solutions for your enterprise.



Adopting Safety by Design principles

OpenAI, alongside industry leaders including Amazon, Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI, and Stability AI, has committed to implementing robust child safety measures in the development, deployment, and maintenance of generative AI technologies, as articulated in the Safety by Design principles. This initiative, led by Thorn, a nonprofit dedicated to defending children from sexual abuse, and All Tech Is Human, an organization dedicated to tackling tech and society’s complex problems, aims to mitigate the risks generative AI poses to children. By adopting comprehensive Safety by Design principles, OpenAI and our peers are ensuring that child safety is prioritized at every stage of AI development.

To date, we have made significant efforts to minimize the potential for our models to generate content that harms children, set age restrictions for ChatGPT, and actively engaged with the National Center for Missing and Exploited Children (NCMEC), the Tech Coalition, and other government and industry stakeholders on child protection issues and enhancements to reporting mechanisms.

As part of this Safety by Design effort, we commit to:

  1. Develop: Develop, build, and train generative AI models
    that proactively address child safety risks.

    • Responsibly source our training datasets, detect and remove child sexual
      abuse material (CSAM) and child sexual exploitation material (CSEM) from
      training data, and report any confirmed CSAM to the relevant
      authorities.
    • Incorporate feedback loops and iterative stress-testing strategies in
      our development process.
    • Deploy solutions to address adversarial misuse.
  2. Deploy: Release and distribute generative AI models after
    they have been trained and evaluated for child safety, providing protections
    throughout the process.

    • Combat and respond to abusive content and conduct, and incorporate
      prevention efforts.
    • Encourage developer ownership in safety by design.
  3. Maintain: Maintain model and platform safety by continuing
    to actively understand and respond to child safety risks.

    • Remove new AIG-CSAM (AI-generated CSAM) created by bad actors from
      our platform.
    • Invest in research and future technology solutions.
    • Fight CSAM, AIG-CSAM and CSEM on our platforms.

This commitment marks an important step in preventing the misuse of AI technologies to create or spread AI-generated child sexual abuse material (AIG-CSAM) and other forms of sexual harm against children. As part of the working group, we have also agreed to release progress updates every year.


