Anthropic
San Francisco, CA, USA
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. We’re building a team that will research and mitigate extreme risks from future models.
This team will intensively red-team models to test for the most significant risks they might pose in areas such as biosecurity, cybersecurity, and autonomy. We believe that clear demonstrations can significantly advance technical research and mitigations, as well as identify effective policy interventions to promote and incentivize safety.
As part of this team, you will lead research to baseline current models and test whether future frontier capabilities could cause significant harm. Day-to-day, you may decide you need to fine-tune a model...