How can we regulate video analytics technology?
Society and pop culture have long been awash with doomsday predictions when it comes to Artificial Intelligence. In fact, The Terminator was released in 1984 (that’s 40 years next year, people!).
While enslavement seems a little far-fetched, these narratives do prompt a more reasonable question: what type, and amount, of regulation is required to keep this rapidly evolving technology in check? And how might these checks and balances relate to the sub-domain of video analytics technology?
Editor of Security Solutions Media, John Bigelow, put this question to a panel of experts during a session titled Using Artificial Intelligence to solve real-world customer challenges at Sydney’s 2023 Security Exhibition & Conference.
John Bigelow, Editor at Security Solutions Media
When we’re looking at Artificial Intelligence, or any sort of cutting-edge technology, as technology leaders looking to implement these sorts of things in a new space, how do we best understand the ethical, regulatory and legal requirements?
How to regulate video analytics technology: ‘We need guardrails – the law is behind the tech’
Aaron Terrey, Director of Vixles Pty Ltd
Yep, this is obviously a relatively challenging topic. There are, I think, 117 recommended changes from the Attorney-General to the Privacy Commissioner that are going to change our Privacy Act. Some of those relate to the breaches we’ve had through cyber attacks, but others relate to the emergence of AI technologies and what we need to do. We need these guardrails. Like any technology, the law’s behind where the technology is. I think technologists like us generally all want to ensure that it’s done responsibly and ethically. But we do need these laws to come in, to make sure we’ve got the guardrails and we understand what’s right and what’s wrong.
Put strict parameters around data retention
Aaron Terrey
John, you were joking about the robot that was being kicked coming back in 10 years’ time to hunt you down. But we need to be extremely proactive in ensuring that data is not unnecessarily retained. So your robot will not come back, should not come back, because it shouldn’t remember you after a period of time. And I think, like any data… we’ve always had data, whether it’s emails or HR documents. I think the change now is that there are a lot of computer vision and AI components coming into the security field. We need to ensure, as leaders and technologists, that we follow those same rules, so we do it ethically and responsibly.
Aaron Terrey
I’m working on a couple of projects at the moment, with facial recognition, where there was absolutely no need to… actually keep that information. So unless there’s actually a trigger showing an alert against a match in the database, all the information should absolutely be deleted — completely. That means the metadata, the images, the recorded video. And there are really good ways of managing this process and getting it through the business.
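The retention rule Aaron describes can be sketched as a simple policy check: detections that matched a watchlist alert are kept, and everything else (metadata, images, recorded video) is purged. This is a minimal illustrative sketch — the data structures and names here are hypothetical, not from any specific product.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    """A single face-detection event (illustrative structure)."""
    detection_id: str
    matched_watchlist: bool  # did it trigger an alert against the database?
    artifacts: list = field(default_factory=list)  # metadata, images, video clips

def apply_retention_policy(detections):
    """Keep only detections that raised a genuine alert; purge the rest entirely."""
    kept, purged = [], []
    for d in detections:
        if d.matched_watchlist:
            kept.append(d)
        else:
            d.artifacts.clear()  # delete metadata, images and recorded video
            purged.append(d.detection_id)
    return kept, purged

events = [
    Detection("a1", matched_watchlist=False, artifacts=["face.jpg", "clip.mp4"]),
    Detection("a2", matched_watchlist=True, artifacts=["face.jpg"]),
]
kept, purged = apply_retention_policy(events)
print([d.detection_id for d in kept])
print(purged)
```

In a real deployment this logic would run on a schedule against the video management system, with the purge audited — the point is that non-retention is the default, not an afterthought.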
Involve risk stakeholders early in the process
Aaron Terrey
And I think, just on a final point: we’ve seen a lot of technology that involves personal information, facial recognition in particular, but also other AI technologies. Getting the business involved early, so that different stakeholders within the business can actually go through and understand the risks upfront, [is very important]. Getting that Privacy Impact Assessment done [will help you to] work and build the technology around mitigating those risks.
Planning for and mitigating bias in video analytics technology
John Bigelow
So Patrick, obviously you work in an area where you’re developing new and cutting-edge solutions all the time, whether it be in behavioral analysis or other areas. We’ve already seen, [with] some of the analytics that are [being produced currently], that there can be inadvertent coded biases, which lead us into that legal, ethical and regulatory minefield. [And oftentimes], we just don’t know what we don’t know until it emerges. How, in your opinion, do we remain mindful of those sorts of things and know what we need to implement, and when?
Patrick Elliott, CEO and Cofounder at VisualCortex
So I think it’s probably on both sides of the coin. There’s the question of what we need to implement and when. We’re all in a world where we talk about diversity a lot; the same thing has to happen in the way we train our machine learning models. There are certain parts of the world where models are very biased, because all of the images used to train the model were based on a specific kind of look and feel. By introducing diversity, and forcing diversity into the training sets, we get a broader and hopefully less biased perspective.
Patrick Elliott
On the other side, regardless of what comes up, you’re responsible for it [as a developer of machine learning models]. So test, test, test, and test again, right? And I’m not just talking about ‘does it work?’ Test the outcomes. In machine learning [for computer vision software], there are always these things called confidence scores — so the computer… is 87% confident that that is a speaker object, right? You have to go back to the ground truth. You have to go back and [manually check your model’s performance against] actual video footage.
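Patrick’s point about checking confidence scores against ground truth is essentially a calibration check: bucket predictions by their claimed confidence, then compare each bucket’s average confidence with its actual accuracy on hand-labelled footage. The sketch below is a generic illustration of that idea — the data and function names are made up, not VisualCortex’s actual QA process.

```python
from collections import defaultdict

def calibration_report(predictions, bucket_size=0.1):
    """Compare claimed confidence with observed accuracy, per confidence bucket.

    predictions: list of (confidence, predicted_label, ground_truth_label).
    """
    n_buckets = int(1 / bucket_size)
    buckets = defaultdict(lambda: {"n": 0, "correct": 0, "conf_sum": 0.0})
    for conf, pred, truth in predictions:
        b = min(int(conf / bucket_size), n_buckets - 1)  # e.g. 0.87 -> bucket 8
        buckets[b]["n"] += 1
        buckets[b]["correct"] += int(pred == truth)
        buckets[b]["conf_sum"] += conf
    report = {}
    for b, s in sorted(buckets.items()):
        report[b] = {
            "mean_confidence": s["conf_sum"] / s["n"],
            "accuracy": s["correct"] / s["n"],
            "count": s["n"],
        }
    return report

# Toy hand-labelled sample: "the model is 87% confident that's a speaker", etc.
sample = [
    (0.87, "speaker", "speaker"),
    (0.91, "speaker", "speaker"),
    (0.88, "speaker", "chair"),  # an over-confident miss
    (0.55, "person", "person"),
]
report = calibration_report(sample)
for bucket, stats in report.items():
    print(bucket, stats)
```

If a bucket’s mean confidence (say, ~0.87) sits well above its measured accuracy, the model is over-confident in that range — exactly the kind of gap that only shows up when you go back to the footage.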
I think if you don’t have that responsibility on both sides, it’s gonna be very difficult to get a genuine outcome.
Is synthetic data to video analytics what quota systems are to HR?
John Bigelow
Michael, I’m gonna hand it over to you to land this plane. You get to work with this stuff all day, every day. What sorts of things are you seeing — from a legal, ethical and regulatory point of view — that you think we need to be aware of, and making sure that we’ve got in place, when dealing with Artificial Intelligence?
Michael Lang, Solutions Architect Manager at NVIDIA
So the legal, the compliance and the ethical aspects have been well known for a very long time. All you have to do is look at medical boards. We’ve been doing this for a while. I think it’s that reflective meta-analysis — and Patrick refers to that… going back and making sure we understand, from the start, that the data is biased. Because it has been, right? No matter what data we have at the moment, it’s biased in the same way that society is biased. So it comes back to asking: ‘Do we insert synthetic data in there to tilt the scales, to balance that out?’
Michael Lang
We could talk about, for example, quotas in society. We have affirmative biases for a reason. You have to ensure that we tip the balance. So do we need to do that with data as well? Do we have that long-term review? Do we reincorporate and re-train as things change over time? Do we not accept the data that we have to start with, [because] it’s a bad starting point? And that’s a really hard one, because [maybe] we’re not going to do it now, because we don’t like the data.
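One common way to “tip the balance” Michael describes is to oversample under-represented groups — with real or synthetic samples — until the training set is roughly balanced. The sketch below uses random duplication as a stand-in for synthetic data generation; the group labels and function names are purely illustrative.

```python
import random

def rebalance(dataset, group_of, rng=None):
    """Oversample under-represented groups until all groups match the largest.

    dataset: list of samples; group_of: function mapping a sample to its group.
    Duplicated samples stand in for real synthetic-data generation here.
    """
    rng = rng or random.Random(0)
    groups = {}
    for s in dataset:
        groups.setdefault(group_of(s), []).append(s)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # pad with random duplicates (a real pipeline would add synthetic images)
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

data = [("img1", "A"), ("img2", "A"), ("img3", "A"), ("img4", "B")]
balanced = rebalance(data, group_of=lambda s: s[1])
counts = {g: sum(1 for _, gg in balanced if gg == g) for g in ("A", "B")}
print(counts)
```

Naive duplication inherits whatever flaws the original samples carry — which is Michael’s deeper point: rebalancing helps only if you have first accepted that the starting data is flawed and keep reviewing it over time.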
Michael Lang
There are a lot of issues there. But I think accepting, upfront, the fact that we are starting from a flawed human model — and that is fundamentally a problem — is probably one of the most important parts. That way, we know we’re looking to eliminate biases, not just to say ‘it’ll be right’ and see if it’s not okay later on. You’ve gotta look for that. You’ve gotta be proactive. It’s important.
Where to next?
Keen to hear more? Check out the full panel discussion – Using Artificial Intelligence to solve real-world customer challenges – here: https://visualcortex.com/2023/11/29/solving-real-world-challenges-with-video-analytics-software/
About VisualCortex
VisualCortex is making video data actionable in the enterprise. Its Video Intelligence Platform provides the stability and flexibility to productionize computer vision technology at scale. Able to be used for any video analytics use case in any industry, VisualCortex’s production-ready cloud-based environment transforms video assets into analyzable streams of data.
The VisualCortex platform delivers the artificial intelligence smarts, governance and usability that enable organizations to connect any number of video streams and repositories, and use existing commodity hardware. An intuitive user interface, out-of-the-box reporting, and a range of configurations and integrations empower non-technical people to produce, analyze and act on insights derived from computer vision throughout the enterprise. Organizations can easily combine these AI-generated video insights with other data sources and systems to facilitate both real-time operations and strategic analysis. The VisualCortex Model Store also provides a secure marketplace for customers, partners and independent machine learning experts to share quality-controlled computer vision models.
For more information, visit www.visualcortex.com
For regular updates, follow VisualCortex on Twitter (@VisualCortexApp), LinkedIn (VisualCortex), YouTube (VisualCortex) and Facebook (@VisualCortexApp).
For regular industry news and analysis, subscribe to VisualCortex’s mailing list here: visualcortex.com/contact-us