Principal GenAI Security Engineer
Disney Experiences
2024-11-13 07:54:28
Seattle, Washington, United States
Job type: fulltime
Job industry: I.T. & Communications
Job description
"We Power the Magic!" That's our motto at Disney Experiences. We deliver experiences to consumers through Disney Parks & Resorts worldwide, Disney Cruise Line, and Disney Vacation Club. We are responsible for the end-to-end digital and physical Guest experience for all technology initiatives across the Attractions & Entertainment, Food & Beverage, Resorts & Transportation, and Merchandise lines of business. We are a global team of nearly 3,000 technology, product, data, and operations professionals across four locations, focused on developing and implementing digital and technology strategies for Disney Experiences businesses around the world.
This role sits on the JedAI Platform team. As a Principal Security Engineer, you will be responsible for ensuring the reliable, safe, and ethical deployment of high-quality AI solutions within highly visible strategic projects, in active collaboration with cross-functional teams.
We're looking for someone with a passion for Generative AI and LLMs and a proven track record of ethical hacking or red teaming. The ideal candidate has experience working with cutting-edge AI technologies and an awareness of emerging threats in the AI landscape.
Responsibilities:
Own the development of a dedicated responsible AI red teaming model, ensuring the reliable, safe, and ethical deployment of AI products.
Develop and maintain a sophisticated, highly scalable toolkit for generating automated prompts against AI products and producing readiness assessment artifacts.
Define and standardize reliable, responsible, governable, traceable, and equitable AI capabilities across the organization.
Discover and exploit Responsible AI vulnerabilities end-to-end to assess the safety of systems.
Leverage a broad stack of technologies to develop methodologies and techniques that scale and accelerate Responsible AI red teaming.
Exercise full autonomy in engaging diverse teams, including enterprise cybersecurity teams, to conduct in-depth inspections of vulnerabilities in AI systems, data, and associated networks.
Research and develop Responsible AI evaluation methods to improve the quality of user-facing AI products.
Create evaluation datasets to solve complex, non-routine analysis problems.
Conduct analyses involving data gathering, requirements specification, processing, and presentation of results.
Build and prototype analysis pipelines iteratively to provide large-scale insights.
Develop extensive knowledge of AI-related data structures and metrics, advocating for necessary changes in product development.
Lead and influence engineering teams to implement the frameworks and tools to support LLM application deployment.
Engage in the estimation and planning of AI projects, influencing the direction and prioritization of both design and development through a Responsible AI lens.
Requirements:
10+ years of related work experience
Strong programming skills, including proficiency in data-querying (SQL, Spark) and scripting languages for data processing (Python, R, or Scala).
Extensive experience in data science, machine learning, and analytics, including statistical data analysis and A/B testing.
Proven ability to craft, conduct, analyze, and interpret experiments and investigations.
Strong interpersonal skills with the ability to explain complex technical topics to diverse audiences, from data scientists to business partners.
Demonstrated experience in leading and motivating teams in AI and software engineering environments.
Expertise in ethical AI deployment and familiarity with relevant frameworks and principles.
Proven track record of contributing to diverse teams in collaborative environments.
Strong self-motivation, work ethic, and time management skills.
Passion for building innovative and outstanding AI products.
Required Education:
Bachelor's degree and/or equivalent work experience
Preferred Education:
Master's degree and/or equivalent work experience
The hiring range for this position is $167,800 to $225,000 per year in Glendale, CA/Anaheim, CA, and $175,800 to $235,700 per year in Seattle, WA. The base pay actually offered will take into account internal equity and may vary depending on the candidate's geographic region, job-related knowledge, skills, and experience, among other factors. A bonus and/or long-term incentive units may be provided as part of the compensation package, in addition to the full range of medical, financial, and/or other benefits, depending on the level and position offered.