Lee Tiedrich brings together an undergraduate education in electrical engineering and over twenty years of legal experience to assist clients on a broad range of intellectual property and technology transaction matters. Her work spans several industries, including ehealth, life sciences, consumer products, communications and media. She counsels both private and public companies, as well as venture capital firms and corporate venture groups in their investments. Ms. Tiedrich has extensive experience negotiating complex intellectual property acquisition, licensing, and development agreements, and regularly counsels clients on strategic issues, such as developing and maintaining intellectual property portfolios and evaluating and addressing intellectual property-related assets and risks.

A foundation of intellectual property rights (IPR), written into the Constitution, is that authors and inventors are entitled to some level of exclusivity over their works in the form of copyrights and patents to incentivize innovation. However, various voluntary open innovation practices have emerged, showing that developers also can benefit by choosing to share certain intellectual property widely in ways that help foster innovation.

While there is no one-size-fits-all approach, the growth of artificial intelligence (AI) has brought a similar trend toward facilitating more voluntary data sharing. Especially considering how AI is being used to address the COVID-19 pandemic and other important needs, voluntary open access to data could have a significant impact in the immediate future. Practices for voluntarily sharing or providing open access to data, however, are still developing and vary widely (in part because of the state of IPR protection for data). These evolving practices create challenges for data contributors and users alike, but the challenges often can be overcome by carefully selecting contract terms to govern the data sharing arrangement that account for the goals and needs of the participants and relevant legal principles.


Continue Reading Look for Voluntary Open Data Practices to Follow Other Open IP Trends

The newly enacted National Defense Authorization Act (“NDAA”) contains important provisions regarding the development and deployment of artificial intelligence (“AI”) and machine learning technologies, many of which build upon previous legislation introduced in the 116th Congress. Representing the most substantial federal U.S. legislation on AI to date, these provisions will have significant implications in the national security sector and beyond. The measures in the NDAA will coordinate a national strategy on the research, development, and deployment of AI, guiding investment and aligning priorities for its use.

President Trump vetoed the NDAA after its initial passage in December, but Congress overrode the veto: the House voted 322-87 on December 28, and the Senate followed with a rare New Year’s Day vote of 81-13, enacting the $740 billion NDAA into law.

This post highlights some of the key AI provisions included in the NDAA.
Continue Reading AI Update: Provisions in the National Defense Authorization Act Signal the Importance of AI to American Competitiveness

President Donald Trump signed an executive order (EO) on December 3, providing guidance for federal agency adoption of artificial intelligence (AI) for government decision-making in a manner that protects privacy and civil rights.

Emphasizing that ongoing adoption and acceptance of AI will depend significantly on public trust, the EO charges the Office of Management and Budget with charting a roadmap for policy guidance by May 2021 on how agencies should use AI technologies in all areas excluding national security and defense. The policy guidance should build upon and expand existing policies addressing information technology design, development, and acquisition.


Continue Reading AI Update: New Executive Order on Promoting the Use of Artificial Intelligence in Federal Agencies Pushes Developing Public Trust for Future Expansion

The National Institute of Standards and Technology (“NIST”) has published the first draft of the Four Principles of Explainable Artificial Intelligence (NISTIR 8312), a white paper that seeks to define the principles that capture the fundamental properties of explainable AI systems. AI Initiative Co-Chair Lee Tiedrich, Sam Choi, and James Yoon discuss the draft white paper.

In this edition of our regular roundup on legislative initiatives related to artificial intelligence (AI), cybersecurity, the Internet of Things (IoT), and connected and autonomous vehicles (CAVs), we focus on key developments in the European Union (EU).


Continue Reading AI, IoT, and CAV Legislative Update: EU Spotlight (Third Quarter 2020)

The National Institute of Standards and Technology (“NIST”) is seeking comments on the first draft of the Four Principles of Explainable Artificial Intelligence (NISTIR 8312), a white paper that seeks to define the principles that capture the fundamental properties of explainable AI systems.  NIST will be accepting comments until October 15, 2020.

In February 2019, the Executive Order on Maintaining American Leadership in Artificial Intelligence directed NIST to develop a plan that would, among other objectives, “ensure that technical standards minimize vulnerability to attacks from malicious actors and reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies; and develop international standards to promote and protect those priorities.”  In response, NIST issued a plan in August 2019 for prioritizing federal agency engagement in the development of AI standards, identifying seven properties that characterize trustworthy AI—accuracy, explainability, resiliency, safety, reliability, objectivity, and security.

NIST’s white paper focuses on explainability and identifies four principles underlying explainable AI.


Continue Reading AI Standards Update: NIST Solicits Comments on the Four Principles of Explainable Artificial Intelligence and Certain Other Developments

On July 17, 2020, the High-Level Expert Group on Artificial Intelligence set up by the European Commission (“AI HLEG”) published The Assessment List for Trustworthy Artificial Intelligence (“Assessment List”). The purpose of the Assessment List is to help companies identify the risks of AI systems they develop, deploy or procure, and implement appropriate measures to mitigate those risks.

The Assessment List is not mandatory, and there isn’t yet a self-certification scheme or other formal framework built around it that would enable companies to signal their adherence to it. The AI HLEG notes that the Assessment List should be used flexibly; organizations can add or ignore elements as they see fit, taking into consideration the sector in which they operate. As we discussed in a previous blog post, the European Commission is currently developing policies and legislative proposals relating to trustworthy AI, and it is possible that the Assessment List may influence the Commission’s thinking on how organizations should operationalize requirements relating to this topic.


Continue Reading AI Update: EU High-Level Working Group Publishes Self Assessment for Trustworthy AI

On June 4, 2020, Representatives Anna Eshoo (D-CA-18), Anthony Gonzalez (R-OH-16), and Mikie Sherrill (D-NJ-11) introduced the National AI Research Resource Task Force Act.  This bipartisan bill would create a task force to propose a roadmap for developing and sustaining a national research cloud for AI.  The cloud would help provide researchers with access to computational resources and large-scale datasets to foster the growth of AI.

“AI is shaping our lives in so many ways, but the true potential of it to improve society is still being discovered by researchers,” explained Rep. Eshoo. “I’m proud to introduce legislation that reimagines how AI research will be conducted by pooling data, compute power, and educational resources for researchers around our country.  This legislation ensures that our country will retain our global lead in AI.”


Continue Reading Bipartisan Bill Seeks to Create National Artificial Intelligence Research Resource Task Force

The COVID-19 pandemic is accelerating the digital transition and the adoption of artificial intelligence (“AI”) tools and Internet of Things (“IoT”) devices in many areas of society. While there has been significant focus on leveraging this technology to fight the pandemic, the technology also will have broader and longer-term benefits. As the New York Times has explained, “social-distancing directives, which are likely to continue in some form after the crisis subsides, could prompt more industries to accelerate their use of automation.”

For businesses proceeding with reopenings over the coming weeks and months, and for sectors that have continued to operate, AI and IoT technologies can greatly improve the way they manage their operations, safely engage with customers, and protect employees during the COVID-19 crisis and beyond. But businesses also should take steps to ensure that their use of AI and IoT technologies complies with evolving legal requirements, which can vary based on several factors, including the industry sector where the technology is deployed and the jurisdiction where it is used. Businesses also will want mechanisms in place to help ensure that the technology is used appropriately, including oversight, workforce training, and other safeguards.


Continue Reading Return to Workplace Considerations for Businesses Using AI and IoT Technologies
