
Nick Kathmann, CISO/CIO at LogicGate – Interview Series


Nicholas Kathmann is the Chief Information Security Officer (CISO) at LogicGate, where he leads the company’s information security program, oversees platform security innovations, and engages with customers on managing cybersecurity risk. With over 20 years of experience in IT and 18+ years in cybersecurity, Kathmann has built and led security operations across small businesses and Fortune 100 enterprises.

LogicGate is a risk and compliance platform that helps organizations automate and scale their governance, risk, and compliance (GRC) programs. Through its flagship product, Risk Cloud®, LogicGate enables teams to identify, assess, and manage risk across the enterprise with customizable workflows, real-time insights, and integrations. The platform supports a wide range of use cases, including third-party risk, cybersecurity compliance, and internal audit management, helping companies build more agile and resilient risk strategies.

You serve as both CISO and CIO at LogicGate. How do you see AI transforming the responsibilities of these roles in the next 2-3 years?

AI is already transforming both of these roles, but in the next 2-3 years, I think we’ll see a major rise in Agentic AI that has the power to reimagine how we handle business processes on a day-to-day basis. Anything that would normally go to an IT help desk, like resetting passwords, installing applications, and more, can be handled by an AI agent. Another critical use case will be leveraging AI agents to handle tedious audit assessments, allowing CISOs and CIOs to prioritize more strategic requests.
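
The interview doesn’t name specific tooling, but the help-desk pattern Kathmann describes can be sketched in a few lines: an agent classifies the request, only pre-approved tools are invoked, and everything else escalates to a human. The function and intent names below are hypothetical, not LogicGate’s implementation.

```python
# Minimal sketch of the help-desk pattern described above: routine
# ticket intents dispatch to pre-approved tools, and anything
# unrecognized escalates to a human. All names here are hypothetical.

def reset_password(username: str) -> str:
    return f"reset link sent to {username}"

def install_application(username: str, app: str) -> str:
    return f"{app} queued for install on {username}'s machine"

TOOLS = {
    "reset_password": reset_password,
    "install_application": install_application,
}

def handle_ticket(intent: str, **kwargs) -> str:
    """Dispatch a classified ticket to an approved tool, else escalate."""
    tool = TOOLS.get(intent)
    return tool(**kwargs) if tool else "escalated to human IT staff"

print(handle_ticket("reset_password", username="jdoe"))
print(handle_ticket("provision_server", username="jdoe"))  # no tool: escalates
```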

With federal cyber layoffs and deregulation trends, how should enterprises approach AI deployment while maintaining a strong security posture?

While we’re seeing a deregulation trend in the U.S., regulations are actually strengthening in the EU. So, if you’re a multinational enterprise, expect to have to comply with global regulatory requirements around the responsible use of AI. For companies operating only in the U.S., I see there being a learning period in terms of AI adoption. I think it’s important for these enterprises to form strong AI governance policies and maintain some human oversight in the deployment process, making sure nothing goes rogue.

What are the biggest blind spots you see today when it comes to integrating AI into existing cybersecurity frameworks?

While there are a few areas I can think of, the most impactful blind spot would be knowing where your data is located and where it’s traversing. The introduction of AI is only going to make oversight in that area more of a challenge. Vendors are enabling AI features in their products, but that data doesn’t always go directly to the AI model/vendor. That renders traditional security tools like DLP and web monitoring effectively blind.

You’ve said most AI governance strategies are “paper tigers.” What are the core components of a governance framework that actually works?

When I say “paper tigers,” I’m referring specifically to governance strategies where only a small team knows the processes and standards, and they are not enforced or even understood throughout the organization. AI is very pervasive, meaning it impacts every group and every team. “One size fits all” strategies aren’t going to work. A finance team implementing AI features into its ERP is different from a product team implementing an AI feature in a specific product, and the list continues. The core components of a strong governance framework vary, but IAPP, OWASP, NIST, and other advisory bodies have pretty good frameworks for determining what to evaluate. The hardest part is figuring out when the requirements apply to each use case.

How can companies avoid AI model drift and ensure responsible use over time without over-engineering their policies?

Drift and degradation are just part of using technology, but AI can significantly accelerate the process. If the drift becomes too great, corrective measures will be needed. A comprehensive testing strategy that looks for and measures accuracy, bias, and other red flags is necessary over time. If companies want to avoid bias and drift, they need to start by ensuring they have the tools in place to identify and measure them.
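
As a minimal illustration of having the tools in place to identify and measure drift, the sketch below compares a model’s live accuracy and output distribution against a baseline captured at deployment time and flags drift once either moves past a threshold. All names and thresholds are illustrative assumptions, not a prescribed standard.

```python
# Minimal drift check: compare live accuracy and prediction
# distribution against a baseline captured at deployment time.
# All names and thresholds here are illustrative, not a standard.
from dataclasses import dataclass

@dataclass
class DriftReport:
    accuracy_drop: float        # baseline accuracy minus current accuracy
    approval_rate_shift: float  # change in share of positive predictions
    drifted: bool

def check_drift(baseline_acc: float,
                baseline_approval_rate: float,
                y_true: list[int],
                y_pred: list[int],
                max_acc_drop: float = 0.05,
                max_rate_shift: float = 0.10) -> DriftReport:
    """Flag drift when accuracy or output distribution moves too far."""
    current_acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_pred)
    current_rate = sum(y_pred) / len(y_pred)
    acc_drop = baseline_acc - current_acc
    rate_shift = abs(current_rate - baseline_approval_rate)
    return DriftReport(
        accuracy_drop=acc_drop,
        approval_rate_shift=rate_shift,
        drifted=acc_drop > max_acc_drop or rate_shift > max_rate_shift,
    )

# Run this on every scoring window (e.g., daily) and alert on .drifted
report = check_drift(0.92, 0.40, y_true=[1, 0, 1, 1], y_pred=[1, 0, 0, 0])
print(report)
```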

What role should changelogs, limited policy updates, and real-time feedback loops play in maintaining agile AI governance?

While they play a role right now in reducing risk and liability for the provider, real-time feedback loops hamper the ability of customers and users to perform AI governance, especially if changes in communication mechanisms happen too frequently.

What concerns do you have around AI bias and discrimination in underwriting or credit scoring, particularly with “Buy Now, Pay Later” (BNPL) services?

Last year, I spoke to an AI/ML researcher at a large, multinational bank who had been experimenting with AI/LLMs across their risk models. The models, even when trained on large and accurate data sets, would make really surprising, unsupported decisions to either approve or deny underwriting. For example, if the words “great credit” were mentioned in a chat transcript or in communications with customers, the models would, by default, deny the loan, regardless of whether the customer said it or the bank employee said it. If AI is going to be relied upon, banks need better oversight and accountability, and those “surprises” need to be minimized.

What’s your take on how we should audit or assess algorithms that make high-stakes decisions, and who should be held accountable?

This goes back to the comprehensive testing model, where it’s necessary to continuously test and benchmark the algorithm/models in as close to real time as possible. This can be difficult, since the model output may have interesting results that will need humans to identify outliers. As a banking example, a model that denies all loans outright will have a great risk rating, since zero loans it underwrites will ever default. In that case, the organization that implements the model/algorithm should be accountable for the outcome of the model, just as it would be if humans were making the decision.
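
The deny-all example is easy to make concrete. In the hypothetical benchmark below, a model that rejects every application posts a flawless default rate while approving nothing, which is why an audit has to weigh several metrics rather than a single risk rating.

```python
# Illustrative benchmark for the deny-all pathology described above.
# A model that rejects everything shows a 0% default rate but also
# approves no loans, so default rate alone is a misleading audit metric.

def benchmark(decisions: list[bool], defaulted: list[bool]) -> dict:
    """decisions[i]: loan i approved; defaulted[i]: loan i would default."""
    approvals = sum(decisions)
    defaults = sum(d and f for d, f in zip(decisions, defaulted))
    return {
        "approval_rate": approvals / len(decisions),
        "default_rate": defaults / approvals if approvals else 0.0,
    }

outcomes = [False, True, False, False]    # which applicants would default
deny_all = [False, False, False, False]   # the "perfect" risk rating
sensible = [True, False, True, True]

print(benchmark(deny_all, outcomes))   # 0% defaults, but 0% approvals
print(benchmark(sensible, outcomes))   # writes business while tracking defaults
```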

With more enterprises requiring cyber insurance, how are AI tools reshaping both the risk landscape and insurance underwriting itself?

AI tools are great at digesting large amounts of data and finding patterns or trends. On the customer side, these tools will be instrumental in understanding the organization’s actual risk and managing that risk. On the underwriter’s side, these tools will be useful in finding inconsistencies and organizations that are becoming immature over time.

How can companies leverage AI to proactively reduce cyber risk and negotiate better terms in today’s insurance market?

Today, the best way to leverage AI for reducing risk and negotiating better insurance terms is to filter out the noise and distractions, helping you focus on the most critical risks. If you reduce those risks in a comprehensive way, your cyber insurance rates should go down. It’s too easy to get overwhelmed by the sheer volume of risks. Don’t get bogged down trying to address every single issue when focusing on the most critical ones can have a much larger impact.
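
Kathmann doesn’t prescribe a scoring scheme, but a common way to filter out the noise is to rank risks by likelihood times impact and work only the top slice first. The sketch below assumes simple 1-5 scales and an arbitrary cutoff; the register entries are invented.

```python
# A minimal sketch of "filtering out noise": score each risk by
# likelihood x impact and surface only the top slice. The scales
# and cutoff are illustrative assumptions, not LogicGate's method.
from typing import NamedTuple

class Risk(NamedTuple):
    name: str
    likelihood: int  # 1 (rare) .. 5 (near certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("Unpatched edge VPN", 4, 5),
    Risk("Stale test account", 2, 1),
    Risk("Vendor with expired SOC 2", 3, 4),
]

# Work the critical items first; defer the long tail.
critical = sorted((r for r in register if r.score >= 12),
                  key=lambda r: r.score, reverse=True)
for r in critical:
    print(f"{r.score:>2}  {r.name}")
```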

What are a few tactical steps you recommend for companies that want to implement AI responsibly but don’t know where to start?

First, you need to understand what your use cases are and document the desired outcomes. Everyone wants to implement AI, but it’s important to think about your goals first and work backwards from there, something I think a lot of organizations struggle with today. Once you have a good understanding of your use cases, you can research the different AI frameworks and understand which of the applicable controls matter for your use cases and implementation. Strong AI governance is also business critical, for both risk mitigation and efficiency, since automation is only as useful as its data input. Organizations leveraging AI must do so responsibly, as partners and customers are asking tough questions about AI sprawl and usage. Not knowing the answer can mean missing out on business deals, directly impacting the bottom line.

If you had to predict the biggest AI-related security risk five years from now, what would it be, and how can we prepare today?

My prediction is that as Agentic AI is built into more business processes and applications, attackers will engage in fraud and misuse to manipulate those agents into delivering malicious outcomes. We have already seen this with the manipulation of customer service agents, resulting in unauthorized deals and refunds. Threat actors used language tricks to bypass policies and interfere with the agent’s decision-making.
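
One way to prepare today, sketched below under stated assumptions, is to enforce hard business limits outside the model so that no amount of linguistic manipulation can push an agent past them. The refund-policy check is hypothetical and not drawn from the incidents described above.

```python
# Hedged sketch: one way to blunt the "unauthorized refunds" failure mode
# is to enforce hard business limits outside the model, so prompt
# manipulation alone cannot exceed them. All names are illustrative.

MAX_AUTO_REFUND = 50.00  # deterministic ceiling, not model-controlled

def execute_refund(order_total: float, requested: float,
                   agent_rationale: str) -> str:
    """Validate an agent-proposed refund against fixed policy.
    The model's rationale is logged but never trusted as authorization."""
    if requested > order_total:
        return "blocked: refund exceeds order total"
    if requested > MAX_AUTO_REFUND:
        return "escalated: human approval required"
    return f"approved: {requested:.2f} refunded"

# A manipulated agent may *ask* for anything; the policy still decides.
print(execute_refund(order_total=80.0, requested=500.0,
                     agent_rationale="customer says the CEO approved it"))
```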

Thank you for the great interview; readers who wish to learn more should visit LogicGate.
