Alston & Bird Consumer Finance Blog

Ginnie Mae Imposes Cybersecurity Incident Notification Obligation

What Happened?

On March 4, 2024, Ginnie Mae issued All Participant Memorandum (APM) 24-02 to impose a new cybersecurity incident notification requirement. Ginnie Mae has also amended its Mortgage-Backed Securities Guide to reflect this new requirement.

Effective immediately, all Issuers, including subservicers, of Ginnie Mae Mortgage-Backed Securities (Issuers) are required to notify Ginnie Mae within 48 hours of detection that a “Significant Cybersecurity Incident” may have occurred.

Issuers must provide email notification to Ginnie Mae with the following information:

  • the date/time of the incident,
  • a summary of the incident based on what is known at the time of notification, and
  • designated point(s) of contact who will be responsible for coordinating any follow-up activities on behalf of the notifying party.

For purposes of this reporting obligation, a “Significant Cybersecurity Incident” is “an event that actually or potentially jeopardizes, without lawful authority, the confidentiality, integrity, or availability of information or an information system; or constitutes a violation or imminent threat of violation of security policies, security procedures, or acceptable use policies or has the potential to directly or indirectly impact the issuer’s ability to meet its obligations under the terms of the Guaranty Agreement.”

Once Ginnie Mae receives notification, it may contact the designated point of contact to obtain further information and establish the appropriate level of engagement needed, depending on the scope and nature of the incident.

Ginnie Mae also previewed that it is reviewing its information security requirements with the intent of further refining its information security, business continuity and reporting requirements.

Why Is It Important?

Under the Ginnie Mae Guaranty Agreement, Issuers are required to furnish reports or information as requested by Ginnie Mae. Any failure of the Issuer to comply with the terms of the Guaranty Agreement constitutes an event of default if it has not been corrected to Ginnie Mae’s satisfaction within 30 days. Moreover, Ginnie Mae reserves the right to declare immediate default if an Issuer receives three or more notices for failure to comply with the Guaranty Agreement. It is worth noting that an immediate default also occurs if certain acts or conditions occur, including the “submission of false reports, statements or data or any act of dishonesty or breach of fiduciary duty to Ginnie Mae related to the MBS program.”

Ginnie Mae’s notification requirement adds to the list of data breach notification obligations with which mortgage servicers must comply. For example, according to the Federal Trade Commission, all states, the District of Columbia, Puerto Rico, and the Virgin Islands have enacted legislation requiring notification of security breaches involving personal information. In addition, depending on the types of information involved in the breach, there may be other laws or regulations that apply. For example, with respect to mortgage servicing, both Fannie Mae and Freddie Mac impose notification obligations similar to that of Ginnie Mae.

What Do I Need to Do?

If you are an Issuer facing a cybersecurity incident, take note of this reporting obligation. For Issuers who have not yet faced a cybersecurity incident, now is the time to prepare: given the rise in cyberattacks against financial services companies, your company could be next.

As regulated entities, mortgage companies must ensure compliance with all the applicable reporting obligations, and the list is growing.  Our Cybersecurity & Risk Management Team can assist.

FTC Approves New Data Breach Notification Requirement for Non-Banking Financial Institutions

On October 27, 2023, the FTC approved an amendment to the Safeguards Rule (the “Amendment”) requiring that non-banking financial institutions notify the FTC in the event of a defined “Notification Event” in which customer information of 500 or more individuals was subject to unauthorized acquisition. The Amendment becomes effective 180 days after publication in the Federal Register. Importantly, the Amendment requires notification only to the Commission – which will post the information publicly – and not to the potentially impacted individuals.

Financial institutions subject to the Safeguards Rule are those not otherwise subject to enforcement by another financial regulator under Section 505 of the Gramm-Leach-Bliley Act, 15 U.S.C. 6805 (“GLBA”). Entities within the FTC’s jurisdiction under the Safeguards Rule include mortgage brokers, “payday” lenders, auto dealers, non-bank lenders, credit counselors and other financial advisors, and collection agencies, among others. The FTC made clear that one primary reason for adopting these new breach notification requirements is to allow the FTC to monitor emerging data security threats affecting non-banking financial institutions and to facilitate prompt investigations following major security breaches – yet another clear indication that the FTC intends to continue focusing on cybersecurity and breach notification procedures.

Notification to the FTC

Under the Amendment, notification to the FTC is required upon a “Notification Event,” which is defined as the acquisition of unencrypted customer information without authorization that involves at least 500 consumers. As a new twist, the Amendment specifies that unauthorized acquisition will be presumed to include unauthorized access to unencrypted customer information, unless the financial institution has evidence that the unauthorized party only accessed, but did not acquire, the information. The presumption of unauthorized acquisition based on unauthorized access is consistent with the FTC’s Health Breach Notification Rule and HIPAA, but not with state data breach notification laws or the GLBA’s Interagency Guidelines Establishing Information Security Standards (“Interagency Guidelines”).

As mentioned above, individual notification requirements for non-banking financial institutions will continue to be governed by state data breach notification statutes and are not otherwise included in the Amendment. The inclusion of a federal regulatory notification requirement, but not an individual notification requirement, in the Amendment is a key departure from other federal financial regulators, as articulated in the Interagency Guidelines, which apply to banking financial institutions, and from the SEC’s proposed rules that would require individual and regulatory reporting by registered investment advisers and broker-dealers.

Expansive Definition of Triggering Customer Information

Again departing from pre-existing notification triggers of “sensitive customer information” in the Interagency Guidelines or “personal information” under state data breach reporting laws, the FTC’s rule requires notification to the Commission if “customer information” is subject to unauthorized acquisition. “Customer information” is defined as “non-public personal information” (see 16 C.F.R. 314.2(d)), which is further defined as “personally identifiable financial information” (see 16 C.F.R. 314.2(n)).

Under the FTC’s rule, “personally identifiable financial information” is broadly defined as (i) information provided by a consumer to obtain a service or product from the reporting entity; (ii) information obtained about a consumer resulting from any transaction involving a financial product or service from the non-banking financial institution; or (iii) information the non-banking financial institution obtains about a consumer in connection with providing a financial product or service to the consumer. Unlike the Interagency Guidelines, which define “sensitive customer information” as a specific subset of data elements (a “customer’s name, address, or telephone number, in conjunction with the customer’s social security number, driver’s license number, account number, credit or debit card number, or a personal identification number or password that would permit access to the customer’s account”) (see 12 CFR Appendix F to Part 225 (III)(A)(1)), the FTC’s definition of “personally identifiable financial information” is much broader.

For example, “personally identifiable financial information” could include information a consumer provides on a loan or credit card application, account balance information, overdraft history, the fact that an individual has been one of your customers, and any information collected through a cookie. Because of this broad definition, notification obligations may be triggered by a wider variety of data events than data breach notifications for banking financial institutions under the Interagency Guidelines or state data breach notification laws. As a result, non-banking financial institutions should consider reviewing and revising their incident response procedures so that they are prepared to conduct a separate analysis of FTC notification requirements under the Amendment, distinct from state law notification requirements.

No Risk of Harm Provision

Although the FTC considered whether to include a “risk of harm” standard for notifying the Commission, it ultimately decided against including one to avoid any ambiguity or the potential for non-banking financial institutions to underestimate the likelihood of misuse. However, numerous state data breach reporting statutes contain risk of harm provisions that excuse notice to individuals and/or state regulators where the unauthorized acquisition and/or access of personal information is unlikely to cause substantial harm (such as fraud or identity theft) to the individual. This divergence between FTC notifications and state law sets the stage for the possibility that a non-banking financial institution could be required to report to the FTC, but not to potentially affected individuals and/or state attorneys general pursuant to state law.

Timing and Content for Notice to FTC

Non-banking financial institutions must notify the Commission as soon as possible, and no later than 30 days after discovery of the Notification Event. Discovery of the event is deemed to be the “first day on which such event is known…to any person other than the person committing the breach, who is [the reporting entity’s] employee, officer, or other agent.” The FTC’s timeline is similar to the timeline for notifying state Attorneys General under most state data breach notification laws (either explicitly or implicitly), but differs in a key respect from the Interagency Guidelines, which require notification to the bank’s primary federal regulator as soon as possible.

The notification must be submitted electronically on a form located on the FTC’s website (https://www.ftc.gov) and include the following information, which will be available to the public: (i) the name and contact information of the reporting financial institution; (ii) a description of the types of information involved in the Notification Event; (iii) the date or date range of the Notification Event (if available); (iv) the number of consumers affected or potentially affected; (v) a general description of the Notification Event; and (vi) whether a law enforcement official (including the official’s contact information) has provided a written determination that notifying the public of the breach would impede a criminal investigation or cause damage to national security. Making this type of information regarding a data security incident available to the public is not part of any current U.S. regulatory notification structure.

Law Enforcement Delays Public Disclosure by FTC, Not FTC Reporting

A law enforcement delay may preclude public posting of the Notification Event by the FTC for up to 30 days but does not excuse timely notification to the FTC. A law enforcement official may seek an additional 60-day extension, which the Commission may grant if it determines that public disclosure of the Notification Event “continues to impede a criminal investigation or cause damage to national security.”

NY DFS Releases Revised Proposed Second Amendment of its Cybersecurity Regulation

The New York Department of Financial Services (“NY DFS”) published an updated proposed Second Amendment to its Cybersecurity Regulation (23 NYCRR Part 500) in the New York State Register on June 28, 2023, updating its previous proposed Second Amendment, which was published November 9, 2022. While the language proposed is largely similar to the previous draft, which we previously summarized, NY DFS incorporated a number of changes as a result of the 60-day comment period.

Below we outline some of the key revisions to the proposed Second Amendment of NY DFS’s Cybersecurity Regulation compared to the previously issued version from November 9, 2022:

  • Risk Assessment (§§ 500.01 & 500.09). NY DFS previously proposed (in the November 2022 draft) to revise the definition of “Risk Assessment,” which NY DFS has repeatedly emphasized is a core and gating requirement for compliance with the Cybersecurity Regulation, permitting covered entities to “take into account the specific circumstances of the covered entity, including but not limited to its size, staffing, governance, businesses, services, products, operations, customers, counterparties, service providers, vendors, other relations and their locations, as well as the geographies and locations of its operations and business relations.” By contrast, the newly proposed definition more formally defines the components of and inputs to the risk assessment: “Risk assessment means the process of identifying, estimating and prioritizing cybersecurity risks to organizational operations (including mission, functions, image and reputation), organizational assets, individuals, customers, consumers, other organizations and critical infrastructure resulting from the operation of an information system. Risk assessments incorporate threat and vulnerability analyses, and consider mitigations provided by security controls planned or in place.” The revised definition omits the explicit reference to tailoring and customization currently found in § 500.09.  The removal of this language and codification of the risk assessment’s general parameters suggests that although risk assessments can and should be customized to some extent, NY DFS may expect risk assessments to address a more standard set of components that as a general framework is not open to customization.
    • In addition, NY DFS removed the requirement that Class A companies (which are generally large entities with at least $20M in gross annual revenue in each of the last two fiscal years from business operations in New York, and either over 2,000 employees on average over the last two years or over $1B in gross annual revenue in each of the last two fiscal years from all business operations) use external experts to conduct a risk assessment once every three years.
  • Multi-factor Authentication (“MFA”) (§ 500.12). NY DFS continues to stress the importance of MFA in the newly revised draft of the proposed Second Amendment by broadening the requirement (relative to the current MFA requirements and the proposed draft from November 2022) and bringing it into alignment with the FTC’s amended Safeguards Rule. In the revised language, MFA is explicitly required to “be utilized for any individual accessing any of the covered entity’s information systems” (with limited exceptions, outlined below); NY DFS removed from § 500.12(a) (1) the prerequisite that MFA be implemented based on the covered entity’s risk assessment, and (2) the option of implementing other effective controls, such as risk-based authentication. By doing so, NY DFS appears to strongly recommend MFA implementation across the board, despite retaining the limited exception where the CISO approves in writing reasonably equivalent or more secure compensating controls (which must be reviewed periodically, and at least annually).
    • For covered entities that fall under the limited exemption set forth in § 500.19(a), which are generally smaller covered entities (based on number of employees and/or annual revenue), MFA must at least be utilized for (1) remote access to the covered entity’s information systems, (2) remote access to third-party applications that are cloud-based, from which nonpublic information is accessible, and (3) all privileged accounts other than service accounts that prohibit interactive logins. As with all other covered entities, the CISO may approve, in writing, reasonably equivalent or more secure compensating controls, but such controls must be reviewed periodically, and at least annually.
  • Incident Response Plan (“IRP”) and Business Continuity and Disaster Recovery Plan (“BCDR”) (§ 500.16). NY DFS added a requirement that a covered entity’s IRP address root cause analysis of a cybersecurity event, describing how the cybersecurity event occurred, the business impact of the cybersecurity event, and remediation steps to prevent reoccurrence. NY DFS clarified that the IRP and BCDR must be tested at least annually and must include the ability to restore the covered entity’s “critical data” and information systems from backup (but NY DFS does not define “critical data”). As noted in our previous summary, the concept of BCDR is new as of the Second Amendment and not currently in effect in the existing regulation.
  • Annual Certification of Compliance (§ 500.17(b)). NY DFS maintains its current requirement of an annual certification of compliance by a covered entity, but has adjusted the standard for certification from “in compliance” to a certification that the covered entity “materially complied” with the Cybersecurity Regulation during the prior calendar year.  Although NY DFS does not define material compliance, this revision should provide some flexibility for covered entities to complete the certification.  Going forward, covered entities would be presented with two options: (i) submit a written certification that it “materially complied” with the regulation (§ 500.17(b)(1)(i)(a)); or (ii) a written acknowledgment that it did not “fully comply” with the regulation (§ 500.17(b)(1)(ii)(a)), while also identifying “all sections…that the entity has not materially complied with” (§ 500.17(b)(1)(ii)(b)).  It is unclear how NY DFS intends for covered entities to parse the distinction between material compliance and a lack of full compliance, but the requirement for the covered entity to list each section with which it was not in material compliance suggests that it may expect a section-by-section analysis of material compliance for purposes of completing the certification process.
  • Penalties (§ 500.20). Interestingly, NY DFS added that it would take into consideration the extent to which the covered entity’s relevant policies and procedures are consistent with nationally-recognized cybersecurity frameworks, such as NIST, in assessing the appropriate penalty for non-compliance with the Cybersecurity Regulation.  DFS maintains its proposed amendment that a “violation” is: (1) the failure to secure or prevent unauthorized access to an individual’s or entity’s NPI due to non-compliance or (2) the “material failure to comply for any 24-hour period” with any section of the regulation.

The revised proposed Second Amendment is subject to a 45-day comment period, ending August 14, 2023.

SHIELD Act Overhauls New York’s Data Breach Notification Framework

On October 23, 2019, New York’s new breach notification provisions came into effect, a result of New York’s passage of the Stop Hacks and Improve Electronic Data Security Act (SHIELD Act) in July. That Act overhauled New York’s data privacy framework, expanding the list of data elements that are considered “private information” while growing the types of incidents and covered entities that may trigger New York’s notification requirement. The SHIELD Act also imposes a new legal obligation for owners and licensors of private data to comply with the Act’s “reasonable security requirement.” Some regulated businesses, like those in the healthcare and financial industries, will be deemed compliant with the SHIELD Act’s reasonable security requirement if they already comply with laws like HIPAA or the GLBA. In an attempt to mitigate its potential burdens on smaller operations, the SHIELD Act explicitly defines small businesses, for whom the Act’s “reasonable security requirement” will be assessed with regard to factors like a business’s “size and complexity.”

The SHIELD Act’s breach notification provisions went into effect on October 23, 2019, while the new data security requirement goes into effect on March 21, 2020.

The Act’s main provisions are described below.

Expanding the Types of Incidents and Entities Covered Under Breach Notification:

The SHIELD Act expands the pool of incidents that trigger mandatory notification to data subjects. Prior to the SHIELD Act, New York required individual notifications only when certain private information was acquired by an unauthorized individual. Under the SHIELD Act, New York now requires individual notifications where such information is either accessed OR acquired. In deciding whether such information has been unlawfully accessed under the statute, the Act directs businesses to consider whether there exist any “indications that the information was viewed, communicated with, used, or altered by a person without valid authorization or by an unauthorized person.” Thus, under the SHIELD Act, if an unauthorized entity merely views information but does not download or copy it, New York requires individual notifications.

The SHIELD Act also expands which entities may be required to make disclosures under New York’s notification requirement. Previously, New York required notifications only from those entities that conducted business in New York and owned or licensed the private information of New York residents. Under the SHIELD Act, New York’s notification requirement applies more broadly to any business that owns or licenses the private information of New York residents, regardless of whether it conducts business in state.

Expanding the Definition of Private Information:

Not only does the SHIELD Act expand the types of breaches which may trigger notifications, it further expands New York’s definition of private information (“PI”) by incorporating biometric data and broadening the circumstances in which financial data is considered PI.  The Act defines biometric data as that which is “generated by electronic measurements of an individual’s unique physical characteristics,” such as fingerprints, voice prints, and retina or iris images.  And while account numbers and credit/debit card numbers were previously only considered PI in combination with security codes and passwords that permitted access to financial accounts, now under the SHIELD Act, such information is considered PI under any circumstances where it could be exploited to gain access to an individual’s financial accounts, even when security codes and passwords remain secure.

Under the SHIELD Act, New York now joins those states that protect online account usernames and e-mail addresses when stored in combination with passwords or security questions that could provide access to online accounts.  The Act does not require usernames and e-mail addresses to be paired with other personal information, beyond that needed to access an online account, to constitute PI.

Clarification of Substitute Notice by E-mail:

Prior to the passage of the SHIELD Act, New York more broadly permitted notification by e-mail when the notifying business had access to the e-mail addresses of all affected data subjects. The SHIELD Act, however, creates a new exception where notice by e-mail is no longer permissible when the breached information includes the data subject’s e-mail address in combination with a password or security question and answer.  This provision appears aimed at preventing businesses from notifying by e-mail when the notification itself may be sent to a compromised account.

Breach Notification Content Requirements and Exemptions:

The SHIELD Act expands the required content of notifications by requiring a business to include the telephone numbers and websites of the relevant state and federal agencies responsible for providing breach response and identity theft services.

On the other hand, the Act also carves out new exceptions in the case of inadvertent disclosures or where notification may already be required under another statute. The SHIELD Act exempts businesses from New York’s breach notification requirement if information was disclosed inadvertently by persons authorized to access the information and the business reasonably determines that such exposure will not likely result in the misuse of information or other financial or emotional harm to the data subject.  Such determinations, however, must be documented in writing and maintained by the disclosing company for at least five years.  If the disclosure affects more than five hundred New York residents, a business availing itself of this exemption must provide the written determination of non-harmfulness to the New York Attorney General within ten days of making the determination.

The Act further exempts certain businesses from making additional notifications where they are already required to notify under other federal or state laws.  Under the SHIELD Act, no further notice is required if notice of a breach is made under any of the following:

1) Title V of the Gramm-Leach-Bliley Act (GLBA);
2) the Health Insurance Portability and Accountability Act (HIPAA) or the Health Information Technology for Economic and Clinical Health Act (HITECH);
3) the New York Department of Financial Services’ Cybersecurity Requirements for Financial Services Companies (23 NYCRR 500); or
4) any other security rule or regulation administered by any official department, division, commission, or agency of the federal or New York state governments.

Reporting HIPAA and HITECH Breaches to the State Attorney General:

Any covered entity required to provide notification of a breach to the Secretary of Health and Human Services under HIPAA or HITECH must also notify the New York Attorney General within five business days of notifying HHS.  Thus, while the SHIELD Act exempts HIPAA and HITECH regulated companies from re-notifying affected individuals, it nevertheless requires an additional notification to the state Attorney General.

Creation of the Reasonable Security Requirement:

Effective March 21, 2020, the SHIELD Act imposes a new “reasonable security requirement” on every covered owner or licensor of New York residents’ private information. The SHIELD Act requires businesses to develop and maintain reasonable administrative, technical, and physical safeguards to ensure the integrity of private information.

Reasonable administrative safeguards include:

(1) Designating one or more employees to coordinate security;
(2) Identifying reasonably foreseeable internal and external risks;
(3) Assessing the sufficiency of the safeguards in place to control identified risks;
(4) Training and managing employees in the security program practices and procedures;
(5) Selecting service providers capable of maintaining safeguards, and requiring those safeguards by contract; and
(6) Adjusting the security program to account for business changes or other new circumstances.

Reasonable technical safeguards include:

(1) Assessing risks in network and software design;
(2) Assessing risks in information processing, transmission, and storage;
(3) Detecting, preventing, and responding to attacks or system failures; and
(4) Regularly testing and monitoring key controls, systems, and procedures.

Reasonable physical safeguards include:

(1) Assessing the risks of information storage and disposal;
(2) Detecting, preventing, and responding to intrusions;
(3) Protecting against unauthorized access to or use of private information during data collection, transportation, and destruction; and
(4) Disposing of private information within a “reasonable amount of time after it is no longer needed for business purposes by erasing electronic media so that the information cannot be read or reconstructed.”

Applying the Reasonable Security Requirement to Small Businesses:

The SHIELD Act makes special provision for small businesses, presumably to avoid overly burdening them. Under the statute, a small business is defined as any business with “(I) fewer than fifty employees; (II) less than three million dollars in gross annual revenue in each of the last three fiscal years; or (III) less than five million dollars in year-end total assets.”  While small businesses are still subject to the reasonable security requirement, their safeguards need only be “appropriate for the size and complexity of the small business, the nature and scope of the small business’s activities, and the sensitivity of the personal information” the small business collects about consumers.

Implications of the SHIELD Act’s Security Requirement for Compliant Regulated Entities:

Just as businesses may be exempt from the SHIELD Act’s notification requirements if they comply with another statute, businesses may also be deemed in compliance with the SHIELD Act’s reasonable security requirement if they are already subject to, and in compliance with, the following data security requirements:

1) Title V of the GLBA;
2) HIPAA or HITECH;
3) 23 NYCRR 500; or
4) any other security rule or regulation administered by any official department, division, commission, or agency of the federal or New York state governments.

Penalties for Noncompliance:

The SHIELD Act increases the penalties for noncompliance with New York’s notification requirements. Previously, businesses faced a fine of the greater of $5,000 or $10 per instance of failed notification, so long as the latter did not exceed $150,000. Now, penalties may grow as large as $20 per instance, with a maximum limit of $250,000.
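The change in exposure is easy to see with a bit of arithmetic. The sketch below is a back-of-the-envelope illustration only (the function names are ours, and it assumes the $5,000 floor and per-instance cap interact as described above; the statute's text and a court's application of it control in practice):

```python
def prior_penalty(failed_notifications: int) -> int:
    """Pre-SHIELD Act formula: the greater of $5,000 or $10 per
    instance of failed notification, with the per-instance amount
    capped at $150,000."""
    return max(5_000, min(10 * failed_notifications, 150_000))

def shield_penalty(failed_notifications: int) -> int:
    """SHIELD Act formula: the greater of $5,000 or $20 per
    instance of failed notification, capped at $250,000."""
    return max(5_000, min(20 * failed_notifications, 250_000))

# A breach in which 50,000 required notifications were never sent:
print(prior_penalty(50_000))   # 150000 (cap reached)
print(shield_penalty(50_000))  # 250000 (cap reached)
```

For smaller incidents the $5,000 floor dominates under both formulas; the new cap matters most for large-scale notification failures.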

The Act also lengthens the period in which legal actions for failure to notify may be commenced from two years to three years. This period is measured from either the date on which the New York Attorney General became aware of the violation or the date a business sends notice to the New York Attorney General, whichever occurs first. Regardless, in no case may an action be brought “after six years from the discovery of the breach by the company unless the company took steps to hide the breach.”

The SHIELD Act empowers the New York Attorney General to sue both for injunctions and civil penalties when businesses fail to comply with the Act’s reasonable security requirements. It explicitly excludes, however, any private right of action under the reasonable security requirement provisions.

The CCPA Could Reset Data Breach Litigation Risks

A&B Abstract:

While much has been written about the California Consumer Privacy Act (CCPA), the focus has primarily been on the new rights it affords California consumers to have access to and control use of their data and opt out of many transfers to third parties. While this is a sea change in data privacy legislation in the United States, perhaps the greatest risk to businesses covered by the CCPA is that the CCPA creates a private right of action – with substantial statutory damages – for data breaches. This change will likely reset litigation risks in California in the post-data-breach context and may have significant implications for data breach litigation across the country.

Overview of the CCPA Breach Provisions

The CCPA will do two significant things for the first time in the world of data breach litigation. First, it will give consumers the ability to sue businesses when their “nonencrypted or nonredacted personal information … is subject to an unauthorized access and exfiltration, theft, or disclosure as a result of the business’s violation of the duty to implement and maintain reasonable security procedures and practices appropriate to the nature of the information.” This private right of action comes into play when the statutory trigger has been met and the incident is a result of the business’s failure to implement and maintain “reasonable security procedures and practices.” This reasonable security requirement essentially codifies negligence claims found in much of today’s post-breach litigation. Second, the CCPA is the first U.S. law to provide for statutory damages in connection with data security incidents, including penalties of $100 to $750 per incident, actual damages, and injunctive relief.

There are two aspects of this portion of the CCPA that provide some hope to breached entities. The definition of personal information used for the private right of action provision of the CCPA is the narrower definition of personal information set forth in the current California data breach notification law, Section 1798.81.5, rather than the now famously broad definition of personal information under the CCPA (information that “identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household”). The statute also requires both access and exfiltration, theft, or disclosure, which is a more exacting standard than those state breach notification laws that require only unauthorized access to personal data.

Damages: Amount & Factors for Consideration by a Court

The CCPA authorizes courts to award statutory damages in such an action of between $100 and $750 “per consumer per incident” or to award actual damages, whichever is greater. Id. § 1798.150(a)(1)(A). The statute directs courts to consider a number of factors in assessing the amount of statutory damages to award, “including, but not limited to, the nature and seriousness of the misconduct, the number of violations, the persistence of the misconduct, the length of time over which the misconduct occurred, the willfulness of
the defendant’s misconduct, and the defendant’s assets, liabilities, and net worth.” Id. § 1798.150(a)(2).[1] These statutory damages are substantial. Moreover, the mere existence of statutory damages will provide data breach plaintiffs with a new argument for standing (which otherwise can be problematic).

First, the statute purports to allow consumers to sue even when they have not suffered any damages as a result of the breach. This is in stark contrast to the most common data breach claims that consumers bring against victims of data breaches today. Those suits are typically based on negligence and/or breach of implied contract theories, both of which require plaintiffs to prove actual damages as an element of their claims. This risk is particularly acute in litigation brought by consumers following the theft of payment card
data, where actual damages are often lacking and are difficult to quantify since payment cards are often canceled and reissued after a data breach and financial institutions are generally required to reimburse consumers for unauthorized charges.

Plaintiffs who attempt to allege a violation of the CCPA will still be constrained – at least in federal court – by the constitutional requirement that they suffer a legally cognizable injury-in-fact in order to have standing to sue. This requirement has been difficult to satisfy for plaintiffs in data breach class actions. Moreover, because the U.S. Supreme Court has held that the mere violation of a statute alone is insufficient to confer Article III standing when it is otherwise lacking, the existence of a private-right-of-action provision in the CCPA does not automatically grant plaintiffs the right to bring a claim in federal court. Courts will ultimately need to address the intersection between the CCPA’s private-right-of-action provision and Article III standing requirements, and this will be an evolving area of the law that companies should pay close attention to over the next several years.

Second, the amount of statutory damages under the CCPA increases the potential overall exposure companies could face in data breach litigation. The statutory damages, which range from $100 to $750 per consumer per incident, can add up very quickly, particularly if a large number of records are impacted by the breach.
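To make concrete how quickly that exposure scales, the back-of-the-envelope calculation below multiplies hypothetical breach sizes by the statutory range. This is an illustrative sketch only: the $100–$750 per-consumer figures come from Cal. Civ. Code § 1798.150(a)(1)(A), but the breach sizes are invented, and an actual award would depend on the statutory factors discussed above.

```python
# Illustrative statutory-damages exposure under Cal. Civ. Code § 1798.150.
# The per-consumer range is statutory; the breach sizes are hypothetical.

STATUTORY_MIN = 100  # dollars per consumer per incident
STATUTORY_MAX = 750  # dollars per consumer per incident

def exposure_range(affected_consumers: int) -> tuple[int, int]:
    """Return the (minimum, maximum) statutory-damages exposure in dollars."""
    return (affected_consumers * STATUTORY_MIN,
            affected_consumers * STATUTORY_MAX)

for consumers in (10_000, 100_000, 1_000_000):
    low, high = exposure_range(consumers)
    print(f"{consumers:>9,} consumers: ${low:,} to ${high:,}")
```

Even a mid-sized breach affecting 100,000 consumers would, on these assumptions, put between $10 million and $75 million of statutory damages in play before any actual damages are considered.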

Third, the prospect of an award of statutory damages has significant class certification implications if the plaintiffs bring a claim for a violation of the CCPA. Defendants have argued in past data breach cases that individualized damages issues are a significant hurdle to trying the plaintiffs’ claims classwide. While the existence of individualized damages issues alone is generally not sufficient to defeat a motion for class certification, it can be part of a powerful argument that predominance is lacking. Thus, in CCPA litigation, defendants will likely have to place a greater emphasis on other defenses to class certification, including case-specific issues that predominate over issues common to the putative class.

Reasonable Security Standard

The CCPA’s private right of action allows for damages when (1) a company experienced a security incident or data breach; and (2) the company failed to maintain reasonable security practices and procedures. This raises the question of what constitutes “reasonable security.” While a detailed discussion of this topic is beyond the scope of this article, potential defendants under the statute should address this issue in their CCPA implementation programs.

In considering this issue, note that California’s former attorney general, Senator Kamala Harris, provided quite clear guidance on what she considered reasonable security. In February 2016, the attorney general’s office released the California Data Breach Report, which analyzed breaches from 2012 to 2015 and provided guidance on what businesses could consider reasonable security. The guidance focuses on the 20 controls in the Center for Internet Security’s (CIS) Critical Security Controls (previously known as the SANS Top 20).  According to Attorney General Harris, these controls “identify a minimum level of information security that all organizations that collect or maintain personal information should meet. The failure to implement all the Controls that apply to an organization’s environment constitutes a lack of reasonable security.” While Attorney General Harris’s guidance does not have the force of law, it is hard to ignore this guidance for purposes of analyzing these provisions of the CCPA.

Of course, there are a number of other third-party protocols similar to the CIS Controls that one might also assert constitute “reasonable security.” These include the National Institute of Standards and Technology (NIST) Cybersecurity Framework, which is now well established and in its latest revision includes over 900 individual security measures; the Control Objectives for Information and Related Technologies (COBIT), created by ISACA; and the International Organization for Standardization’s ISO/IEC 27000:2018 standards, among many others.[2]

The FTC has also been active in establishing at least what does not constitute reasonable security in its eyes. There have been a number of FTC enforcement actions against companies involving security issues, including In the Matter of Accretive Health Inc., Docket No. C-4432; In the Matter of Uber Technologies Inc., Docket No. C-4662; In the Matter of DSW Inc., Docket No. C-4157; In the Matter of the TJX Companies Inc., Docket No. C-4227; In the Matter of Goal Financial LLC, Docket No. C-4216; and In the Matter of Twitter Inc., Docket No. C-4316. Of course, there has also been significant litigation in this area somewhat expanding (FTC v. Wyndham Worldwide Corporation, 799 F. 3d 236 (3d Cir. 2015)) and contracting (LABMD Inc. v. FTC, 894 F.3d 1221 (11th Cir. 2018)) the FTC’s oversight in this area.

Companies subject to existing regulatory regimes have for some time dealt with security standards such as the Health Insurance Portability and Accountability Act (HIPAA) Security Rule, 45 C.F.R. §§ 160, 164(a), 164(c), and the Gramm–Leach–Bliley (GLB) Safeguards Rule, 15 U.S.C. 6801(b), 6805(b)(2) (among others, although data subject to HIPAA and GLB is currently excepted from application of the CCPA). In the wake of the CCPA, however, companies that have not previously been subject to express regulation of their security practices should now affirmatively consider whether their security programs will allow them to comfortably assert that they have met their “reasonable security” obligation under the CCPA.

National Litigation Implications

Because it includes an express private right of action and authorizes courts to award statutory penalties, the CCPA will substantially increase litigation risk and exposure for companies that are subject to a data breach. The impact will be most strongly felt when claims are brought by (or on behalf of a class of) California residents or against a company that is organized or maintains its principal place of business in California, where the argument for the application of California law will be the strongest. See Phillips Petroleum Co. v. Shutts, 472 U.S. 797, 821 (1985) (holding that due process is violated when a court attempts to apply the law of one state with “little or no relationship” to the transaction “in order to satisfy the procedural requirement that there be a ‘common question of law’”). Nevertheless, the CCPA could have broader implications for data breach litigation nationwide.

First, it could incentivize plaintiffs to file more data breach class actions in California, though plaintiffs will be constrained in their ability to do so by the Supreme Court’s decision in Bristol-Myers Squibb Co. v. Superior Court, 137 S. Ct. 1773 (2017), which holds that state courts generally cannot exercise personal jurisdiction over an out-of-state defendant for claims brought by nonresident plaintiffs.

Second, plaintiffs’ lawyers are also likely to try to effectively expand the scope of the CCPA’s private-right-of-action provision by attempting to bring suit for violations of the CCPA under California’s Unfair Competition Law, Cal. Bus. & Prof. Code § 17200. That statute prohibits any person from engaging in “any unlawful, unfair or fraudulent business act or practice,” and allows plaintiffs to “borrow[] violations of other laws and treat[] them as unlawful practices that the unfair competition law makes independently actionable.” Cel-Tech Communications Inc. v. Los Angeles Cellular Telephone Co., 20 Cal. 4th 163, 180 (1999). Plaintiffs are likely to argue that any violation of the CCPA, regardless of whether it falls within the private-right-of-action provision, is actionable under the Unfair Competition Law. While this has not yet been litigated, companies will have a strong argument that plaintiffs should not be able to evade the narrow scope of the private-right-of-action provision in this manner. The CCPA’s private-right-of-action
provision expressly states that nothing in the CCPA “shall be interpreted to serve as the basis for a private right of action under any other law.” Cal. Civ. Code § 1798.150(c). By including this provision in the law, it stands to reason that the legislature expressly intended to exempt the CCPA from the reach of the Unfair Competition Law. Nevertheless, companies should carefully monitor litigation in this area, as a court ruling to the contrary could dramatically increase the litigation risk posed by the CCPA. See also
Robert D. Phillips, Jr. & Gillian H. Clow, An Update on the California Consumer Privacy Act and Its Private Right of Action, available at https://www.alston.com/en/insights/publications/2018/09/california-consumerprivacy-act.

[1] To bring a private action under the CCPA, the consumer must first “provide[] a business 30 days’ written notice identifying the specific provisions of this title the consumer alleges have been or are being violated.” Cal. Civ. Code § 1798.150(b).

[2] A few other states have included similar reasonableness standards in their breach notification statutes (although these statutes do not include corresponding private rights of action). For example, Indiana’s statute, Ind. Code § 24-4.9-3-3.5(c), states that “a data base owner shall implement and maintain reasonable procedures, including taking any appropriate corrective action, to protect and safeguard from unlawful use or disclosure any personal information of Indiana residents collected or maintained by the data base owner.”