Federal and Local Privacy and Security Update for September 2024

Welcome to this month’s issue of The BR Privacy & Security Download, the digital newsletter of Blank Rome’s Privacy, Security & Data Protection practice.


STATE & LOCAL LAWS & REGULATION

Illinois Amends State Human Rights Act to Regulate Use of AI in Employment Settings
On August 9, Illinois Governor J.B. Pritzker signed HB 3773 into law, prohibiting the use of artificial intelligence (“AI”) in a manner that results in unlawful discrimination in the employment context. Unlike other states—such as California and Colorado—whose current and draft AI legislation and regulations require bias audits, risk assessments, and related governance measures surrounding the use of AI in the employment or automated decision-making context, the amended Illinois Human Rights Act makes it a civil rights violation for employers to use AI in a manner that discriminates based on a protected classification. However, companies will likely need to implement these governance tools in order to identify the systems that could qualify as covered AI systems under the Act and to determine whether the use of a tool could result in discrimination. The Act becomes effective on January 1, 2026.

California Legislature Passes AI Bills
California’s legislature passed three bills regulating the development of AI systems prior to the close of its legislative session on August 30. SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, applies to “covered models,” which are AI models trained using specified thresholds of computing power, the cost of which exceeds $100 million. Developers of these foundational AI models would be required to comply with a number of safety requirements, including implementing the capability to fully shut down the AI model, implementing a written safety and security protocol, and retaining a third-party auditor to verify the developer’s compliance with SB 1047, among other things. AB 2013, the Artificial Intelligence Training Data Transparency Act, would require developers of generative AI systems to post on the developer’s internet website documentation regarding the data used to train the generative AI system, including a high-level summary of the datasets used. Finally, SB 942, the California AI Transparency Act, would require a covered provider to make available an AI detection tool at no cost to users and to offer users an option to disclose that content is AI-generated. All three bills have been sent to California Governor Newsom’s desk. The Governor has until September 30, 2024, to sign or veto the bills.

New Hampshire AG Introduces New Data Privacy Unit to Enforce State Privacy Act
The New Hampshire Attorney General’s Office announced the creation of a new Data Privacy Unit (the “Unit”), which will be housed within the Consumer Protection and Antitrust Bureau and primarily responsible for enforcing compliance with the New Hampshire Data Privacy Act (the “Act”) going into effect on January 1, 2025. The Unit is expected to publish a series of FAQs in the next few months to assist New Hampshire consumers and covered businesses in understanding consumers’ new rights over their personal data and the responsibilities of those businesses under the Act. Businesses that fail to comply risk civil penalties of up to $10,000 per violation of the Act or, for certain willful violations, criminal fines of up to $100,000 per violation.

New York Attorney General Issues Advanced Notice of Proposed Rulemaking for the SAFE for Kids Act
The Office of the New York State Attorney General (“OAG”) released an Advanced Notice of Proposed Rulemaking (“ANPRM”) for the Stop Addictive Feeds Exploitation (“SAFE”) for Kids Act. The SAFE for Kids Act prohibits covered operators from providing addictive feeds to minors under 18, using data concerning the minor to personalize the material the minor sees, unless they use commercially reasonable and technically feasible methods to verify that the user is not a minor or obtain verified parental consent. The SAFE for Kids’ ANPRM seeks public comments on several key areas, including analyzing age verification methods, defining “verifiable parental consent,” identifying factors to categorize addictive social media platforms and their feeds, determining the scope of addictive feeds, and ensuring language access for parental consent. Stakeholders can submit their comments in response to the ANPRM to [email protected] by September 30, 2024, after which the OAG will initiate the formal rulemaking process. 

New York Attorney General Issues Advanced Notice of Proposed Rulemaking for the New York Child Data Protection Act
Simultaneously with the SAFE for Kids Act ANPRM, the OAG released an ANPRM for the Child Data Protection Act (the “CDPA”). The CDPA prohibits covered operators from collecting and processing the personal information of minors 13 and under unless such processing is allowed under the Children’s Online Privacy Protection Act. The CDPA further prohibits covered operators from processing the personal information of minors between the ages of 13 and 18, unless the operator has obtained “informed consent,” or the processing is “strictly necessary” for certain purposes. The CDPA’s ANPRM seeks public comments on several topics, including defining what constitutes processing that is strictly necessary to be permissible without requiring specific consent and exploring methods for obtaining informed consent and parental consent. Like the SAFE for Kids’ ANPRM, stakeholders can submit their comments in response to the CDPA’s ANPRM to [email protected] by September 30, 2024, after which the OAG will initiate the formal rulemaking process.


FEDERAL LAWS & REGULATION

Department of Defense Issues Rule Regarding Cybersecurity Maturity Model Certification Implementation
The U.S. Department of Defense (“DOD”) issued a proposed rule outlining how its pending Cybersecurity Maturity Model Certification (“CMMC”) program will be incorporated into defense contracts and applied to defense contractors. The CMMC was previously outlined in December 2023 and is intended to help protect sensitive, nonpublic federal information. The CMMC will be phased in over three years following a final rule issuance. Once the phase-in period ends, minimum CMMC requirements will be included in nearly all defense contracts and solicitations that involve processing, storing, or transmitting federal contract information and controlled unclassified information. The new proposed rule provides additional assurances to the DOD that a defense contractor can adequately protect sensitive unclassified information. The rule also supports the protection of intellectual property from malicious acts that have a significant impact on the U.S. economy and national security.

Senate Passes Kids Online Safety Act
On July 30, the U.S. Senate passed a bill combining the Kids Online Safety and Privacy Act (“KOSA”) and the Children and Teens’ Online Privacy Protection Act (“CTOPPA”). KOSA would establish an affirmative “duty of care” for covered online platforms “reasonably likely” to be used by a minor under the age of 17. This duty would require covered platforms to take reasonable measures to mitigate harm to minors, including the threats of certain mental health disorders, physical violence, online bullying and harassment, sexual exploitation, and other forms of abuse. CTOPPA, on the other hand, establishes new rights and requirements applicable to teens between the ages of 13 and 17 and their parents and guardians. CTOPPA’s requirements are closer to those found in state online privacy laws and include deletion and correction rights, prohibitions on targeted advertising to minors, data minimization requirements, and limited data localization requirements. Combined, KOSA and CTOPPA would significantly increase the number of businesses required to comply with federal children’s privacy requirements and the obligations of those entities that already comply. The bill is currently before the U.S. House of Representatives and is expected to be considered in September.

Senate Committee Advances Several AI Bills
The Senate Commerce, Science, and Transportation Committee advanced several bipartisan AI-focused bills. The committee advanced nine AI bills, including the Future of AI Innovation Act, the CREATE AI Act, the NSF AI Education Act (endorsed by OpenAI), and the VET AI Act and TEST AI Act, both supported by Google. Senator Ted Cruz criticized some of the bills, arguing they could result in over-regulation of AI in the United States and stifle innovation while favoring a few large companies. Committee discussions reflect the global competition to take the lead in AI while balancing concerns for public safety.

FCC Proposes First AI-Generated Robocall & Robotext Rules
The FCC released a Notice of Proposed Rulemaking, Implications of Artificial Intelligence Technologies on Protecting Consumers from Unwanted Robocalls and Robotexts, proposing and seeking comments on a new regulatory framework to protect consumers from AI robocalls and encouraging positive uses of AI technology. The proposed rules, if adopted, would define “AI-generated call” and place new opt-out and transparency requirements on callers who intend to use AI-generated content (such as AI-generated voices) in calls and text messages to consumers. FCC Chairwoman Jessica Rosenworcel reaffirmed that the FCC’s rulemaking work on AI is grounded in the key principle of transparency, while also considering how these AI technologies may be harnessed to help people with speech or hearing disabilities or to better protect consumers against telephone scams.


U.S. LITIGATION

HHS Withdraws Appeal on Tracking Technology Guidance Case
The U.S. Department of Health and Human Services (“HHS”) has withdrawn its appeal of the U.S. District Court for the Northern District of Texas (the “Court”) decision ordering HHS to rescind its guidance on the use of tracking technologies by covered entities and business associates under the Health Insurance Portability and Accountability Act (“HIPAA”). The guidance suggested that an online tracking technology connecting the IP address of a user’s device or other identifying information with a visit to an unauthenticated webpage addressing specific health conditions would be sufficient to constitute protected health information (“PHI”) under HIPAA. The Court held that metadata (e.g., IP address) input by website users into a HIPAA-regulated entity’s unauthenticated, publicly facing webpage does not constitute PHI, and the Court vacated the guidance to this extent. HHS filed an appeal but withdrew the appeal without any further comment.

9th Circuit Issues Ruling on California Age-Appropriate Design Code Act
On August 16, the Ninth Circuit partially upheld a ruling issued by a California district court granting a preliminary injunction blocking enforcement of the California Age-Appropriate Design Code Act (“CADCA”). CADCA was unanimously passed in 2022 and was scheduled to become effective on July 1, 2024. NetChoice—an industry group comprised of entities such as Google, Meta, Netflix, and X—filed a lawsuit challenging the Act in late 2022 and successfully moved for a preliminary injunction in 2023, arguing that CADCA violated the First Amendment by unlawfully censoring online speech. The California district court found that the data protection impact assessment (“DPIA”) requirements of the law likely violated the First Amendment and that these DPIA requirements could not reasonably be severed from the rest of the Act. On review, the Ninth Circuit largely agreed with the district court’s finding that the DPIA provisions of the Act likely violate the First Amendment. Therefore, it affirmed the preliminary injunction with respect to these portions of the Act. However, the panel did not agree that the DPIA provisions could not be severed from other portions of the law. The panel instead found that the record had not been sufficiently developed to reach a determination. Therefore, it remanded the remainder of the matter to the district court for further proceedings. This ruling leaves the fate of the CADCA uncertain, with both sides of the dispute calling the ruling a victory. Businesses potentially subject to the Act should monitor further developments in this matter, as well as the potential impacts of this ruling on other provisions of California privacy regulations requiring DPIAs.

FTC Files Amicus Brief Regarding COPPA and Arbitration
The Federal Trade Commission (“FTC”) filed an amicus brief in Shanahan v. IXL Learning, Inc., a lawsuit brought by parents against IXL Learning, Inc. (“IXL”) for collecting, using, and selling their children’s data on IXL’s website and software in school. IXL filed a motion to compel arbitration, claiming that the school districts, acting as agents for the parents in the use of IXL’s educational services under the Children’s Online Privacy Protection Act (“COPPA”), agreed to IXL’s full terms of service, including an arbitration provision. The FTC, in its amicus brief, clarified that nothing in COPPA or its implementing regulations dictates that parents and children should be bound by every part of the terms of service agreement between a company like IXL and a school district, nor does COPPA support a claim that parents should be bound to arbitration in this case.

VPPA Case Settled for $7.25M
Patreon, Inc. (“Patreon”) has agreed to pay $7.25 million, including roughly $2.1 million in attorneys’ fees, to resolve the May 2022 class action, Stark et al. v. Patreon, Inc., brought by its users claiming that the online video platform unlawfully disclosed users’ video viewing habits to third-party analytics and marketing companies without users’ consent, in direct violation of the federal Video Privacy Protection Act (“VPPA”). Class members entitled to submit a claim for a portion of the settlement fund include Patreon users who accessed video content from the platform after April 1, 2016, and also had a Facebook account during that time. Prior to agreeing to settle, Patreon argued that applying the 1988 law in the online platform context is unconstitutional under the First Amendment. However, this settlement, once approved, adds to a growing trend of recent class-action settlements brought under the VPPA.


U.S. ENFORCEMENT

Texas Attorney General Takes Action against Auto Manufacturer for Data Collection and Sharing Practices
The Texas Attorney General filed a lawsuit against General Motors (“GM”) and its OnStar unit, accusing them of unlawfully collecting and selling drivers’ private data without consent. The lawsuit alleges that GM violated the Texas Deceptive Trade Practices Act by engaging in false and misleading practices. This action follows the Attorney General’s June 2024 announcement of an investigation into several car manufacturers for similar data privacy concerns. The lawsuit claims that GM collected, recorded, analyzed, and transmitted highly detailed driving data from over 1.8 million Texans using telematics systems installed in most GM vehicles since the 2015 model year. The lawsuit alleges this data was then sold to third parties, including insurance companies, without drivers’ knowledge or consent. GM allegedly misled consumers by implying that the data collection was solely for improving vehicle safety and functionality, while in reality, it was used to generate “driving scores” that impacted insurance premiums and coverage decisions.

Department of Justice Sues TikTok for Violations of Children’s Privacy Law
The U.S. Department of Justice filed a complaint in the Central District of California against TikTok, its parent company, and several TikTok affiliates. The complaint alleges that the TikTok entities violated the Children’s Online Privacy Protection Act (“COPPA”) by knowingly allowing minors to create TikTok accounts and unlawfully gathering personal information from them. According to the complaint, TikTok has knowingly allowed millions of children under 13 to create accounts and make, view, and share content with adults on the platform. The predecessor to TikTok, Musical.ly, had previously entered into a settlement with the FTC over allowing children to use the platform, agreeing to take corrective measures to protect users going forward. The complaint also alleges that TikTok collected and retained personal information, such as email addresses and website activity data, without notifying parents or obtaining their consent.

DOJ Files Complaint against Defense Contractor for Cybersecurity Violations
The U.S. Department of Justice (“DOJ”), along with the U.S. Attorney’s Office for the Northern District of Georgia, filed a complaint-in-intervention against the Georgia Tech Research Corporation and the Georgia Institute of Technology (collectively, “Georgia Tech”), alleging violations of the False Claims Act (“FCA”) arising from Georgia Tech’s failure to meet cybersecurity requirements in connection with the performance of its Department of Defense (“DoD”) contracts. The case is based on a whistleblower complaint that was filed by two Georgia Tech employees in July 2022. Although no data breach occurred, the DOJ alleges in its complaint that Georgia Tech failed to implement contractual cybersecurity controls required by DFARS 252.204-7012 under contracts with the U.S. Air Force and the Defense Advanced Research Projects Agency. Specifically, the DOJ alleges that Georgia Tech failed to comply with NIST SP 800-171’s requirements to: (i) implement and maintain a comprehensive System Security Plan (“SSP”); (ii) install, update, and run antivirus software; and (iii) post an accurate NIST self-assessment score.

SEC and CFTC Issue Fines for Failure to Preserve Electronic Communications
The U.S. Securities and Exchange Commission (“SEC”) and the Commodity Futures Trading Commission (“CFTC”) settled enforcement actions against 26 broker-dealers and investment advisers for failures to maintain and preserve electronic communications. The firms admitted that, during the relevant periods, their personnel sent and received off-channel communications via text that were records required to be maintained under the securities laws and CFTC recordkeeping requirements. The failures involved personnel at multiple levels of authority, including supervisors and senior managers. The firms were each charged with violating certain recordkeeping provisions of the Securities Exchange Act, the Investment Advisers Act, or the CFTC recordkeeping requirements, and for failing to reasonably supervise their personnel to prevent and detect those violations. The settlements by both the SEC and the CFTC resulted in over $474 million in fines. The settlements are part of an ongoing wave of regulatory crackdowns on firms’ failures to keep records of text messages, WhatsApp communications, and other off-channel communications. 

CFIUS Fines Telecommunications Provider for Unauthorized Data Access Incidents and Failure to Report Security Incidents
The Committee on Foreign Investment in the United States (“CFIUS”) resolved an enforcement action against T-Mobile U.S., Inc. (“T-Mobile”) resulting in a $60 million penalty. T-Mobile had entered into a National Security Agreement (“NSA”) with CFIUS in 2018 in connection with T-Mobile’s merger with Sprint and the foreign ownership of the resulting entity. CFIUS determined that between August 2020 and June 2021, in violation of a material provision of the NSA, T-Mobile failed to take appropriate measures to prevent unauthorized access to certain sensitive data and failed to report some incidents of unauthorized access promptly to CFIUS, delaying the Committee’s efforts to investigate and mitigate any potential harm. CFIUS concluded that these violations resulted in harm to the national security equities of the United States. According to CFIUS, T-Mobile has worked with CFIUS to enhance its compliance posture and obligations and has committed to working cooperatively with the U.S. government to ensure compliance with its obligations going forward.

New York, New Jersey, and Connecticut Attorneys General Announce Settlement with Biotech Company Relating to Failure to Protect Health Data
Enzo Biochem, Inc. (“Enzo”) reached a $4.5 million settlement with the attorneys general of New York, New Jersey, and Connecticut. This settlement follows a data breach in April 2023 that exposed the personal information of over 2.4 million individuals. The breach involved cyber attackers exploiting outdated and shared administrator login credentials to infiltrate Enzo’s network, exfiltrate 1.4 terabytes of unencrypted patient data, and deploy ransomware. The investigation found that Enzo’s cybersecurity measures were inadequate, including failing to use multi-factor authentication and sufficient monitoring systems, which allowed the breach to go undetected for several days. The attorneys general alleged Enzo’s security and breach response violated several federal and state data security regulations, including the HIPAA Security and Breach Notification Rules. As part of the settlement, Enzo will pay $2.8 million to New York, $930,000 to New Jersey, and $743,000 to Connecticut. Additionally, Enzo is required to implement comprehensive cybersecurity enhancements, such as securing data storage, installing an endpoint detection system, enforcing multi-factor authentication, and conducting regular risk assessments. The company must also undergo a third-party security assessment within 180 days and provide identity theft protection services to those affected by the breach. 

SEC Fines Equity Transfer Agent Company for Security Failures
The SEC announced a settlement with Equiniti Trust Company LLC (“Equiniti”). The SEC alleged the company failed to protect client securities and funds, leading to a loss exceeding $6.6 million from cyber incidents in 2022 and 2023. Equiniti agreed to an $850,000 civil penalty. In September 2022, a cyber attacker hijacked an email conversation between Equiniti and a U.S.-based public issuer client, impersonating an employee to direct the issuance and liquidation of millions of new shares, with the proceeds sent to a foreign bank account. This led to a loss of around $4.78 million, with about $1 million later recovered. In April 2023, a different cyber attacker exploited stolen Social Security numbers to set up fake accounts connected to legitimate client accounts. This enabled the attacker to liquidate securities and transfer about $1.9 million to external bank accounts, with roughly $1.6 million subsequently recovered. The SEC determined that Equiniti’s security lapses breached Section 17A(d) of the Securities Exchange Act of 1934 and Rule 17Ad-12. In addition to the civil penalty, Equiniti accepted a cease-and-desist order and censure. Monique Winkler, director of the SEC’s San Francisco Regional Office, stressed the importance of transfer agents adopting robust safeguards to counter sophisticated cyber threats.

FCC Fines Telecom Company for Failure to Properly Validate Caller ID Information
The FCC imposed a $1 million fine on Lingo Telecom, LLC (“Lingo”) for not correctly verifying caller ID information, resulting in AI-generated robocalls that imitated President Joe Biden’s voice. These calls targeted New Hampshire voters before the Democratic primary, misleadingly claiming that participating in the primary would bar them from voting in the general election. Political consultant Steve Kramer, who orchestrated the calls, is facing a separate $6 million fine and several charges, such as voter suppression and candidate impersonation. The FCC’s investigation found that Lingo Telecom breached the Truth in Caller ID Act by sending false caller ID information with deceptive intent. The company also violated the FCC’s STIR/SHAKEN framework, which requires carriers to verify caller IDs to prevent spoofing. As part of the settlement, Lingo must adopt a thorough compliance plan, which includes verifying the accuracy of information from customers and upstream providers and following stringent caller ID authentication rules.


BIOMETRIC PRIVACY

Court Approves Settlement for White Castle’s Violation of Illinois Biometric Privacy Law
U.S. District Judge John Tharp, Jr. approved a $9.39 million settlement between White Castle and a group of employees claiming the fast-food chain violated Illinois’ Biometric Information Privacy Act (“BIPA”). The claim alleged that White Castle collected employees’ fingerprint data without obtaining their written consent and did not tell employees how or whether the data was used, stored, or disposed of. The case later reached the Illinois Supreme Court on the issue of whether BIPA claims arise from each unlawful data collection or dissemination or just the first instance. The Illinois Supreme Court ruled that the BIPA claims arise with each scan. As part of the settlement, Judge Tharp also approved a $7,500 incentive award for the lead plaintiff in the case and more than $3.5 million in counsel fees.

Illinois Biometric Information Privacy Act Amendments Signed into Law
Illinois Governor Pritzker signed into law Senate Bill 2979 (“SB2979”), amending Illinois’ BIPA to significantly reduce potential liability for companies by limiting claims for statutory damages against companies that collect or disclose the same biometric identifier from the same person using the same collection method in violation of the law to just one recovery. The amendment directly overturns the 2023 ruling on claims accruals in Cothron v. White Castle Systems, Inc., in which the Illinois Supreme Court determined that BIPA claims, including statutory damages liability, accrued separately for each violating scan or transmission. SB2979 also expands the definition of “written release” from informed written consent to expressly include electronic signatures, meaning any “electronic sound, symbol, or process attached to or logically associated with a record and executed or adopted by a person with the intent to sign the record.”


INTERNATIONAL LAWS & REGULATION

Dutch Data Protection Authority Imposes 290 Million Euro Fine for Data Transfer Violations
The Dutch Data Protection Authority (“DPA”) imposed a fine of 290 million euros on Uber for a violation of the EU General Data Protection Regulation (“GDPR”). The Dutch DPA found in its investigation that, over the course of two years, Uber transferred the sensitive personal data of European taxi drivers, such as account details, taxi licenses, location data, photos, payment details, identity documents, and in some cases even criminal and medical data, to the United States without the use of appropriate cross-border transfer mechanisms as required by the GDPR. The Dutch DPA noted that the EU-U.S. Privacy Shield was invalidated in 2020 and thus, Uber should have had other cross-border transfer mechanisms in place, such as the Standard Contractual Clauses (“SCCs”). However, according to the Dutch DPA, Uber stopped using the SCCs in August 2021 and the data of European drivers were insufficiently protected.

Irish Data Protection Commission Launches Proceedings over Social Media Platform Use of User Data for AI Systems
The Irish Data Protection Commission (“DPC”) has launched proceedings against Twitter International Unlimited Company (“Twitter”), the parent of the “X” platform, over concerns about how the personal data of millions of European users is being processed. The DPC’s concerns focus on Twitter’s use of AI systems, including its enhanced search tool known as “Grok.” While Twitter has implemented some mitigation measures, such as an opt-out mechanism, the DPC alleges that X’s users continue to have personal information processed without mitigation measures consistent with the rights under the GDPR. In its action, the DPC requests a court suspend, restrict, or prohibit Twitter from processing the personal data of X users for the purposes of developing, training, or refining any machine learning, large language, or other AI systems.

ICO Fines Software Provider for Failure to Implement Security Measures
The UK Information Commissioner’s Office (“ICO”) announced its provisional decision to fine Advanced Computer Software Group Ltd (“Advanced”) £6.09m, following an initial finding that the provider failed to implement measures to protect the personal information of 82,946 people, including some sensitive personal information. Advanced operates as a processor for organizations, including the UK National Health Service and other healthcare providers. The decision relates to a ransomware incident in which hackers accessed a number of Advanced systems via customer accounts that did not have multi-factor authentication. Hackers exfiltrated data, including phone numbers and medical records, as well as details of how to gain entry to the homes of 890 people who were receiving care at home. The ICO emphasized that processors have their own obligations, independent of the controllers for whom they process data, to implement appropriate technical and organizational measures to ensure personal information is kept secure. This includes taking steps to assess and mitigate risks, such as regularly checking for vulnerabilities, implementing multi-factor authentication, and keeping systems up to date with the latest security patches.

ICO Launches Privacy Notice Generator
The U.K.’s data protection authority, the Information Commissioner’s Office (“ICO”), has launched a privacy notice generator on its website. The tool offers two different types of privacy notice – one for customer and supplier information, which organizations can display on their website or external communications, and another for staff and volunteer information, for inclusion in welcome packs, policy libraries, or other internal channels accessible to staff and volunteers. Intended to assist small organizations and sole traders in creating a privacy notice that is compliant with the UK GDPR, the privacy notice generator takes users through a series of questions regarding the types of data the organization collects, the sources of the data, the purposes for collection, with whom such data is disclosed, the legal bases for collecting and processing the data, and how the data may be transferred outside the United Kingdom.

Swiss-U.S. Data Privacy Framework Approved
On August 14, 2024, the Swiss Federal Council approved amendments to the Data Protection Ordinance officially adding the United States to the list of countries deemed to provide adequate data protection for personal data. Swiss law, much like that of the European Union and United Kingdom, restricts the transfer of personal data to countries unless they have received an official adequacy determination. The certification of the U.S.–Swiss Data Privacy Framework follows the adoption of the EU-U.S. Data Privacy Framework in 2023, which replaced the prior Privacy Shield framework. Switzerland’s authorization of this framework is designed to ensure “a level playing field for private individuals and businesses in Switzerland.”

Daniel R. Saeedi, Rachel L. Schaller, Ana Tagvoryan, P. Gavin Eastgate, Timothy W. Dickens, Gabrielle N. Ganze, Jason C. Hirsch, Tianmei Ann Huang, Adam J. Landy, Amanda M. Noonan, and Karen H. Shin contributed to this article.
