How to approach the FTC’s advice on AI today



The Federal Trade Commission (FTC) recently released a blog post titled “Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI,” which should serve as a shot across the bow for the large number of companies the FTC regulates.

Signaling a stronger regulatory stance on deployed algorithms, the FTC highlights some of the issues around AI bias and unfair treatment, and states that existing regulations – such as the Fair Credit Reporting Act, the Equal Credit Opportunity Act, and the FTC Act – still apply and will be enforced against algorithmic decision-making.

The FTC has presented a set of principles and recommendations for well-controlled algorithm deployments, and the post serves as a pragmatic checklist that companies should consider when setting up their AI governance and deploying machine learning models in compliance with FTC regulations.

Start with the right foundation

The FTC advocates a foundation-first approach: good data, solid processes, basic analytics, and simple models before worrying about higher-order modeling. In many of the presentations I give, I use the graphic below from Monica Rogati, who created the AI Hierarchy of Needs to make essentially the same point.

Without building machine learning/artificial intelligence (ML/AI) in the right order, businesses are much more prone to issues of bias and opacity due to a lack of data quality, established data management practices, and a mature model deployment infrastructure.

Today, most ML/AI infrastructure is narrowly focused on pre-deployment phases and technical audiences. The ideal infrastructure must also support production environments and non-technical users in order to address the other concerns highlighted by the FTC, from discriminatory outcomes to independent validation.

Beware of discriminatory results

Typically, companies fall into the trap of unknowingly building their models using biased data. By using common or internal datasets, with historical human biases built in, they run the risk of building models that are not representative of the population they are attempting to model.

However, companies also need to take a full-lifecycle approach to tackling bias, because it is not just a data issue. They should also pursue periodic independent validations to determine whether their models are working as intended and are not producing disparate impacts on protected or disadvantaged populations.
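To make that kind of periodic check concrete, here is a minimal sketch in Python of a disparate-impact screen over logged model decisions. It assumes a hypothetical table with a protected-attribute column ("group") and an "approved" outcome flag, and it applies the common four-fifths rule of thumb; the column names and threshold are illustrative assumptions, not anything prescribed by the FTC guidance.

```python
# Minimal sketch of a disparate-impact screen over logged model decisions.
# Assumes a pandas DataFrame with hypothetical columns:
#   "group"    - protected-attribute value for each applicant
#   "approved" - 1 if the model approved the applicant, else 0
import pandas as pd


def disparate_impact_report(decisions: pd.DataFrame,
                            group_col: str = "group",
                            outcome_col: str = "approved",
                            threshold: float = 0.8) -> pd.DataFrame:
    """Compare each group's approval rate to the most-favored group.

    A ratio below `threshold` (the common four-fifths rule of thumb)
    is flagged for further review; this is a screening signal, not a
    legal determination.
    """
    rates = decisions.groupby(group_col)[outcome_col].mean()
    report = rates.to_frame("approval_rate")
    report["impact_ratio"] = report["approval_rate"] / report["approval_rate"].max()
    report["flag_for_review"] = report["impact_ratio"] < threshold
    return report


# Fabricated example data, purely for illustration:
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})
print(disparate_impact_report(decisions))
```

In practice, a check like this would run on a schedule against production decision logs, with any flagged groups routed to deeper review by an independent validator.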

Embrace transparency and independence

The profession of auditor has existed in one form or another since the Middle Ages, and after the securities acts of the early 1930s, independent auditing became a mandatory annual ritual for all publicly traded companies. Today, no one would dream of investing in a large company without independent validation.

In the world of algorithmic decision-making, companies have so far avoided independent validation and certain elements of transparency, often as a means of protecting their intellectual property, but also because of a lack of knowledge and of technologies that make such audits and validations possible. However, for models making substantial decisions about individuals, this kind of commercial secrecy around algorithms undermines public confidence in the models. For algorithms that affect end users, independent validation is emerging as an important component of a well-assured modeling program.

In the spirit of full transparency: we founded Monitaur with the vision of doing just that, independent assurance for models. We agree with the FTC that objective standards and periodic validation by independent assurance providers make for reliable algorithmic decision-making. The danger of not having objective external review of models is that regulatory impulses will go much further than model builders would like.

For example, the FTC suggests that companies should open their data and source code to outside inspection. Very few organizations would take that risk with their intellectual property. More likely, such a demand would stifle innovation that could genuinely benefit the very consumers regulators want to protect.

Don’t exaggerate what your algorithm can do

The guidance emphasizes that under the FTC Act, statements to customers must be truthful and supported by evidence. The hyper-competitive tech space and corporate pressure to signal AI innovation have led many companies to exaggerate their capabilities, sometimes calling systems “AI” when they are really more basic rule-based systems with a high degree of human process built in. Given the FTC’s comments, companies should consider that overstating their AI could in fact trigger unwanted regulatory scrutiny.

In addition to false claims about AI capabilities, the FTC specifically raises the danger of exaggerating “whether [AI] can deliver fair or unbiased results.” Automated decision systems are fallible, and companies must take special care to independently validate decisions and periodically test their models to ensure that biases do not start to creep in. Unfortunately, much of the technical discussion of bias in AI treats it as a data-only issue, a perspective that overlooks the other factors that also cause bias.

Tell the truth about how you use data

In the current “data is the new oil” mindset, many companies are not upfront with consumers about using their data for model training. Beyond being an unethical business practice and a legal-action risk, this will create regulatory difficulties down the road for any company under the FTC’s umbrella.

The trend does seem to be changing, with the GDPR, the CCPA, and US states and Congress passing or considering laws that prohibit this type of data use without permission or disclosure. In the tech space, the tide has also started to turn with developments such as Apple’s iOS 14.5, which requires users to opt in to app tracking, and mission-driven technology foundations establishing principles and best practices. Businesses would be wise to create or strengthen data governance programs in anticipation of a proliferation of oversight bodies scrutinizing how they use customer data and how they inform customers of that use.
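As a small illustration of what one data governance control might look like, here is a minimal sketch that gates training data on recorded consent. The "consent_to_training" flag and other column names are hypothetical, not drawn from any specific law or framework.

```python
# Minimal sketch: gate training data on recorded consent and keep a
# simple audit trail of what was excluded. The "consent_to_training"
# flag and other column names are hypothetical.
import pandas as pd


def select_consented_records(records: pd.DataFrame,
                             consent_col: str = "consent_to_training"):
    """Return only records whose owners consented to training use,
    plus a small audit summary to log with the dataset version."""
    consented = records[records[consent_col].astype(bool)]
    audit = {
        "total_records": len(records),
        "used_for_training": len(consented),
        "excluded_no_consent": len(records) - len(consented),
    }
    return consented, audit


# Fabricated example data, purely for illustration:
records = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "consent_to_training": [True, False, True, True],
    "feature_x": [0.2, 0.9, 0.4, 0.7],
})
train_df, audit = select_consented_records(records)
print(audit)  # log alongside the dataset version for later review
```

Logging the audit summary with each dataset version gives the governance program a record of how customer data was actually used.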

Do more good than harm and hold yourself accountable

We fundamentally believe that AI can positively improve our daily lives and, in many ways, create more fairness and justice through transparent data and systems; however, without clear governance and responsible intentions, models run the risk of being unfair and causing more harm than good. Essentially, if a model makes a process unfair, or less fair than existing practices, for a protected class, the FTC will deem that model unfair. Again, this position places the burden of proof on the company building with ML/AI to show that it has compared new AI/ML products with previous offerings and documented their effectiveness for the consumer prior to launch.
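To illustrate what comparing a new model against a previous offering and documenting it before launch could look like, here is a minimal sketch. It assumes a hypothetical evaluation set containing ground-truth outcomes, a protected-attribute column, and decisions from both the incumbent process and the candidate model; all column names and metrics are illustrative assumptions, not part of the FTC guidance.

```python
# Minimal sketch: compare a candidate model against the existing process
# on the same evaluation set before launch, looking at both overall
# effectiveness and the worst group-level approval-rate gap.
# Column names ("outcome", "group", "incumbent", "candidate") are hypothetical.
import pandas as pd


def prelaunch_comparison(df: pd.DataFrame,
                         incumbent_col: str,
                         candidate_col: str,
                         outcome_col: str = "outcome",
                         group_col: str = "group") -> dict:
    def accuracy(pred_col: str) -> float:
        return float((df[pred_col] == df[outcome_col]).mean())

    def worst_group_gap(pred_col: str) -> float:
        # Largest difference in approval rates between any two groups.
        rates = df.groupby(group_col)[pred_col].mean()
        return float(rates.max() - rates.min())

    return {
        "incumbent_accuracy": accuracy(incumbent_col),
        "candidate_accuracy": accuracy(candidate_col),
        "incumbent_worst_group_gap": worst_group_gap(incumbent_col),
        "candidate_worst_group_gap": worst_group_gap(candidate_col),
    }


# Fabricated example data, purely for illustration:
df = pd.DataFrame({
    "group":     ["A", "A", "B", "B", "B"],
    "outcome":   [1,   0,   1,   1,   0],
    "incumbent": [1,   0,   0,   1,   0],
    "candidate": [1,   1,   1,   1,   0],
})
print(prelaunch_comparison(df, "incumbent", "candidate"))
```

The specific metrics matter less than the habit of producing and archiving this comparison before launch, so the documentation exists if a regulator asks for it.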

The bottom line is that the FTC says to “hold yourself accountable – or be prepared for the FTC to do it for you.” Accountability should be the primary driver of how companies assure their systems, which is why accountability for the performance of your algorithms is a central control in the Business Understanding category of our ML Assurance framework.

Image credit: Laurent T / Shutterstock

Andrew Clark is CTO and co-founder of Monitaur.


