Remy Meraz, CEO and Co-Founder of Zella Life, is speaking out against the harmful impacts SB 1047 would have on her small business and on fellow innovators throughout California.
We need balanced AI legislation that promotes the good and addresses the risks.
Join us in urging Governor Newsom to veto SB 1047.
[00:00]
My concern around these bills is that startups like mine will be at a significant disadvantage, and it’s important that we have a seat at the table and have our voice heard.
[00:18]
We utilize AI to solve workforce development problems that companies have.
[00:24]
In being able to do it at scale, our company has created a model that actually saves companies time and money while increasing the overall value of their work product.
[00:37]
That’s what we are doing using AI, and it was built on my lived experiences in the corporate world.
[00:44]
And so I’m so passionate about the work that we’re doing to prevent this.
[00:55]
What AI has done for us is that it has allowed us to compete with bigger players in our space, right?
[01:01]
The Goliath of the coaching industry.
[01:04]
It has allowed us to compete and win business.
[01:13]
If the bigger companies are, you know, if it’s only limited to them, like it goes against what this country stands for, you know that you can start a business and you can…
Updated Position Statement on California Senate Bill 1047 (SB 1047)
August 28, 2024
The AI Salon Policy & Civic Engagement Guild recognizes the critical importance of regulating AI to prevent harmful uses while also fostering innovation. As we consider California Senate Bill 1047 (SB 1047), we acknowledge the concerns raised by startups and small businesses, and we propose solutions that align with our mission to support ethical AI development and a vibrant, inclusive technology ecosystem.
1. Ethical AI Development and Public Safety
Issue: Startups and small companies express concern that SB 1047 regulates core AI technology rather than focusing on misuse, which could burden developers and stifle innovation.
Solution: We recommend amending the bill to concentrate regulation on high-risk applications of AI rather than on general-purpose AI technology. This could involve use-case-specific regulation, where the bill's requirements apply primarily to AI used in critical sectors such as healthcare or defense. Introducing exemptions for general-purpose AI models would reduce unnecessary regulatory burdens while still addressing potential harms (U.S. Chamber of Commerce; Evrim Ağacı).
2. Innovation and Economic Impact
Issue: There is a fear that SB 1047 could hinder AI innovation by imposing heavy compliance obligations, particularly on smaller companies.
Solution: We propose implementing a tiered compliance framework that scales requirements based on the size and resources of the company and the potential risks associated with their AI applications. Risk-based tiering would adjust obligations according to the level of risk, allowing startups more flexibility. Additionally, offering innovation incentives, such as grants or tax credits, would encourage the development of safe and ethical AI without stifling creativity and growth (Senator Scott Wiener; Evrim Ağacı).
3. Inclusivity, Transparency, and Market Fairness
Issue: The bill could empower larger AI companies at the expense of smaller innovators, creating an uneven playing field.
Solution: To address this, we advocate for the creation of regulatory sandboxes where startups can test AI models in a controlled environment with reduced regulatory burdens. This would allow smaller companies to innovate and refine their technologies before full compliance is required. Additionally, forming a diverse advisory board with representatives from startups, small businesses, and civil society would ensure that the bill's implementation is fair and inclusive (LegiScan).
4. Predictability and Transparency in Implementation
Issue: Startups are concerned that the bill could deter investment and innovation by creating an unpredictable regulatory environment.
Solution: We recommend enhancing transparency and inclusivity in the regulatory process by mandating regular public consultations and developing clear, accessible compliance guidelines tailored to different types of businesses. This would reduce uncertainty and help smaller companies navigate their obligations under the law, encouraging continued investment and innovation.
Conclusion
The AI Salon Policy & Civic Engagement Guild supports the goals of SB 1047 but calls for further amendments that address the legitimate concerns of startups and small businesses. By focusing regulation on misuse, scaling compliance requirements, ensuring fairness in the market, and enhancing transparency, we believe the bill can protect public safety while also fostering a dynamic, innovative AI sector in California. This balanced approach aligns with our mission to promote ethical AI development that benefits society as a whole.
Position on CA SB 1047/AB 2930
Executive Summary
California's proposed 2024 bills, SB 1047 and AB 2930, aim to regulate artificial intelligence with the goals of enhancing transparency, ensuring ethical use, and preventing potential harms. While these objectives are commendable, the bills in their current form risk hindering innovation, stifling economic growth, and creating significant compliance burdens.
Key concerns include:
Stifling Innovation and Economic Growth: Overly burdensome regulations may discourage AI development and lead to job losses.
Reduced Competition and Higher Costs: Compliance requirements favor large companies, potentially driving smaller startups out of the market.
Data Privacy Implications: Extensive reporting requirements could inadvertently expose sensitive data, conflicting with existing privacy laws.
Potential for Technological Lag: Overregulation might cause California to fall behind other regions in AI development.
Personal Risk for Developers: Compliance under penalty of perjury imposes significant personal risk on developers.
Impact on AI Research: Compliance concerns could lead to a brain drain, with researchers moving to more favorable regions.
International Competitiveness: Strict regulations could disadvantage California-based companies in the global AI market.
Please sign the letters of support!
Introduction
California's proposed 2024 bills, SB 1047 and AB 2930, aim to regulate artificial intelligence (AI) technologies with the objectives of enhancing transparency, ensuring ethical use, and preventing potential harms. These bills introduce stringent compliance requirements for AI developers, mandate detailed harm analysis, and impose personal liability on developers for non-compliance. While the intentions behind these bills are commendable, aiming to protect the public and ensure responsible AI development, their current form may lead to significant negative impacts and unintended consequences.
Here’s a closer look at the potential issues these bills might introduce:
Economic Impacts
Stifling Innovation and Economic Growth
Overly burdensome regulations may:
Discourage AI development in California.
Lead to job losses and reduced economic opportunities in the tech sector.
Cause the public to miss out on beneficial AI applications in healthcare, education, and other fields.
Quote from SB 1047: "Developers must ensure compliance with all reporting and harm analysis requirements under penalty of perjury."
Reduced Competition and Higher Costs
Complex compliance requirements favor large tech companies with more resources, potentially:
Forcing smaller companies and startups out of the market.
Reducing competition, which could result in higher prices for AI-powered products and services.
Limiting consumer choice and access to diverse AI solutions.
Potential for Technological Lag
Overregulation might cause California to fall behind in AI development compared to other regions. This could reduce the state's influence in shaping global AI standards and ethics.
International Competitiveness
Strict regulations might put California-based companies at a disadvantage in the global AI market. This could reduce the state's economic competitiveness and leadership in tech innovation.
Example: Regions such as Singapore and Canada have adopted more flexible AI regulations than those proposed for California, encouraging innovation while maintaining safety standards and positioning themselves as more attractive destinations for AI development.
Potential Brain Drain
The bills could discourage academic institutions from conducting cutting-edge AI research due to compliance concerns. This may lead to a brain drain, with researchers moving to other states or countries with more favorable conditions.
Technological Concerns
Regulating Technology Instead of Applications
SB 1047 makes the fundamental mistake of regulating AI technology rather than its applications. This approach:
Does not effectively address safety concerns.
Hampers the development and deployment of beneficial AI technologies.
Creates a one-size-fits-all regulatory framework for diverse AI applications.
Hampering Open-Source Development
Strict regulations could discourage open-source AI projects, which may:
Limit public access to AI tools.
Reduce transparency in AI development.
Slow down collaborative innovation in the AI field.
Overemphasis on Bias Prevention at the Expense of Functionality
While addressing bias is crucial, the bills' approach may lead to:
Overcautious AI systems that prioritize avoiding any potential bias over functionality.
Reduced effectiveness of AI in critical applications like medical diagnosis or safety systems.
A chilling effect on AI development in sensitive but important domains.
Slowing Down Emergency Response and Innovation
Strict compliance requirements might:
Delay the deployment of AI in emergency situations.
Hinder rapid responses to crises where AI could be beneficial.
Slow down the development and adoption of potentially life-saving AI technologies.
Adaptability to Rapid Technological Changes
The rigid structure of these bills might not keep pace with rapid advances in AI technology, leading to outdated regulations that hinder rather than help as AI continues to evolve.
Legal and Compliance Issues
Ambiguous and Complex Requirements
The bills include vague and complex reporting requirements, especially for developers fine-tuning models or developing models costing over $100 million to train. This ambiguity:
Creates a gray zone in which it is difficult to comply without risking significant penalties.
Favors large tech companies with resources to navigate complex regulations.
May discourage smaller companies and startups from innovating in the AI space.
Quote from AB 2930: "All AI developers must submit a detailed harm analysis report for each new model, subject to periodic review and revision."
Personal Risk for Developers
Developers are required to submit certifications of compliance under penalty of perjury. This:
Imposes considerable personal risk on developers.
Requires declarations of compliance with potentially shifting standards set by an unelected body.
May lead to overly cautious development practices, stifling innovation.
Unclear Harm Analysis
The requirement to analyze potential harms a model might cause or enable is problematic because:
Even leading AI researchers struggle to predict these harms accurately.
It's unrealistic and unfair to expect developers to provide such analyses under penalty of perjury.
This may lead to overly broad or speculative harm assessments, potentially restricting useful AI applications.
Protection Measures Against Misuse
The bills require developers to implement protections against misuse or unsafe modifications of AI models. However:
There's no consensus among AI researchers on the best ways to protect AI models from such risks.
This lack of clarity makes it nearly impossible for developers to ensure compliance.
It may lead to inconsistent or ineffective protection measures across different AI systems.
Data Privacy Implications
The extensive reporting requirements might inadvertently expose sensitive data. This could conflict with existing privacy laws and erode public trust in AI systems.
Potential for Regulatory Capture
The complexity of the regulations may lead to:
Heavy involvement of tech companies in shaping enforcement.
Regulations that favor industry interests over public benefit.
A disconnect between regulatory intent and practical implementation.
Geographical Limitations
California-specific regulations may:
Create a patchwork of rules across states, confusing consumers and complicating interstate commerce.
Put California-based companies at a disadvantage compared to those in other states or countries.
Potentially drive AI development and related economic benefits to other jurisdictions.
Public Trust and Perception
Overly complex regulations might increase public skepticism about AI rather than building trust. This could lead to slower adoption of beneficial AI technologies in crucial sectors like healthcare and education.
Positive Intentions Behind the Bills
The objectives of SB 1047 and AB 2930 are rooted in a desire to protect the public from potential harms associated with AI, ensure ethical AI development, and promote transparency in AI systems. These goals reflect a proactive stance towards managing the risks of AI technologies, which is commendable. However, the implementation and specific provisions of these bills need to be carefully reconsidered to avoid stifling innovation and creating undue burdens on developers and businesses.
Alternative Approaches to Improve AI Safety
Instead of the current proposals, regulators could pursue more effective routes to enhance AI safety, such as:
Outlawing Specific Harmful Applications: Focus on regulating clearly harmful uses of AI, such as nonconsensual deepfake pornography. For instance, Germany has implemented strict laws against deepfake pornography, providing a clear example of targeted regulation that addresses specific harms.
Standardizing Watermarking and Fingerprinting: Support the development and implementation of technologies that identify AI-generated content and prevent misuse. The European Union's AI Act, which includes provisions for transparency and accountability in AI-generated content, serves as a useful model; a minimal illustrative sketch of content fingerprinting follows this list.
Investing in Safety Research: Allocate resources to support red teaming and other safety research initiatives to better understand and mitigate AI risks. The National Institute of Standards and Technology (NIST) in the US has been leading efforts in this area, promoting rigorous testing and evaluation of AI systems.
Encouraging Industry Self-Regulation: Promote the development of industry standards and best practices for responsible AI development. Initiatives like the Partnership on AI, which brings together various stakeholders to establish best practices, can be an effective way to foster responsible AI use.
Focusing on Public Education: Invest in AI literacy programs to help the public understand the capabilities and limitations of AI technologies. Finland's free online course, "Elements of AI," has been a successful example, educating citizens about AI and its implications.
Supporting International Collaboration: Work towards harmonized AI governance frameworks that promote innovation while addressing global AI safety concerns. The Global Partnership on AI (GPAI) is an example of international collaboration aimed at promoting responsible AI development.
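To make the fingerprinting idea above concrete, here is a minimal, hypothetical sketch in Python. It is not drawn from SB 1047, the EU AI Act, or any deployed standard; the key, function names, and tagging scheme (PROVIDER_KEY, fingerprint, verify) are illustrative assumptions. It shows the simplest form of the idea: a provider attaches a cryptographic tag to generated content so that anyone holding the key can later confirm provenance and detect tampering.

```python
import hashlib
import hmac

# Hypothetical shared key held by the AI provider. Real schemes use
# public-key signatures or statistical watermarks, not a single secret.
PROVIDER_KEY = b"example-provider-key"

def fingerprint(content: str) -> str:
    """Return an HMAC-SHA256 tag binding the content to the provider's key."""
    return hmac.new(PROVIDER_KEY, content.encode("utf-8"), hashlib.sha256).hexdigest()

def verify(content: str, tag: str) -> bool:
    """Check a tag against content using a constant-time comparison."""
    return hmac.compare_digest(fingerprint(content), tag)

if __name__ == "__main__":
    text = "This paragraph was generated by an AI model."
    tag = fingerprint(text)               # attached by the provider at generation time
    print(verify(text, tag))              # True: provenance verifies
    print(verify(text + " edited", tag))  # False: content was altered after tagging
```

Real-world schemes are considerably harder: statistical text watermarks must survive paraphrasing, and provenance standards such as C2PA bind signed manifests to media rather than relying on a single shared key. The sketch only illustrates why standardization matters: verification is impossible unless producers and verifiers agree on a common scheme.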
Conclusion
While the intent behind SB 1047 and AB 2930 is to protect the public interest, their current form may lead to unintended consequences that could ultimately harm rather than help the public. A more effective approach would focus on regulating specific high-risk AI applications, supporting ongoing safety research, and fostering a collaborative environment for responsible AI development. By addressing these concerns and incorporating alternative approaches, California could reshape its regulatory framework to promote innovation while effectively addressing legitimate safety concerns.