Machine Learning: Make Cybersecurity a Priority

Machine Learning (ML) is well established in cybersecurity, internet search engines, healthcare diagnostics, supply chain optimization and many other use cases. Organizations are investing billions of dollars in ML, using it to analyze data that helps improve decision-making, customer satisfaction and other outcomes. According to a recent survey, 83% of respondents said their companies have achieved moderate or substantial benefits from ML.

Some people treat ML as a magic bullet, but it is not one. You can't just flip an ML switch and expect gains in profitability, accuracy or efficiency. Success with ML depends on the quality of the data fed into the system and the methods used to train it. And the choice of training method – supervised learning, unsupervised learning, reinforcement learning or a combination – depends on the specific project's variables and goals.
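To make the distinction concrete, here is a minimal sketch (using scikit-learn, purely for illustration) showing how two of those training methods treat the same data differently: supervised learning requires labeled examples, while unsupervised learning looks for structure with no labels at all.

```python
# Illustrative only: contrasting supervised and unsupervised learning
# on the same toy dataset. The data and models are hypothetical examples,
# not a recommendation for any particular project.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Toy data: 200 points forming two well-separated groups.
X, y = make_blobs(n_samples=200, centers=2, random_state=42)

# Supervised: learn a decision boundary from labeled examples (X, y).
clf = LogisticRegression().fit(X, y)
supervised_acc = clf.score(X, y)

# Unsupervised: discover groupings in X alone, ignoring the labels.
km = KMeans(n_clusters=2, n_init=10, random_state=42).fit(X)
clusters_found = len(set(km.labels_))

print(f"supervised training accuracy: {supervised_acc:.2f}")
print(f"unsupervised clusters found: {clusters_found}")
```

The point of the sketch: the supervised model can only be as good as its labels, while the unsupervised model's output still has to be interpreted by people who understand the data – which is why data quality and method choice are project decisions, not defaults.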

Launching an ML system should not be thought of as a discrete activity. ML projects consume resources and touch many areas of an organization, so they require thoughtful, thorough planning and execution, with a clear security strategy embedded in the plan. One way to lower the barriers to success is to leverage current investments whenever possible. Many organizations already have technology that can be used in ML systems – Spark, for example, offers ML libraries, APIs and a developer support community. Open-source tools and platforms also are potential resources.

Another strategy is to build a strong ML team that can set strategy and direction, define use cases, identify realistic goals and oversee the work. A lot of decisions need to be made. Just think about the effort of defining the data strategy, the build strategy (in-house, outsourced or a combination), ML methodologies and ML metrics, not to mention engaging the governance, risk, legal, compliance and IT/security groups. Skipping any of these steps, or handling them less than comprehensively, carries real cybersecurity consequences.

Security should be built in end to end, covering ML systems and their connection points. The stakes are high, given the likely use of sensitive data – think about consumer health information used to improve diagnostic capabilities, or customer purchase details used to improve relationships and experience. The bad guys are paying attention and may be using ML themselves. Engaging a cybersecurity provider with a strong understanding of ML implications, and of its connections to other areas of the business, is critical to organizational defense.

A good first step with cybersecurity for ML initiatives is to assess your current security strategy and solutions to understand how to leverage what's already in place. In addition, applying security measures is always easier when data is mapped, classified and aligned with data privacy regulations like the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA) during the planning stages. While regulations may not always be clear, it's advisable to adopt, or even go beyond, rigorous standards.
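As a rough illustration of what "mapped and classified during planning" can mean in practice, the sketch below tags data fields with sensitivity levels and filters out regulated fields before they reach an ML pipeline. The field names and sensitivity labels are hypothetical examples, not a standard or a complete compliance control.

```python
# Hypothetical classification map, of the kind produced during a
# data-mapping exercise. Labels and fields are illustrative only.
FIELD_SENSITIVITY = {
    "diagnosis_code": "restricted",     # health data (HIPAA-covered)
    "email_address": "confidential",    # personal data (GDPR-covered)
    "purchase_total": "internal",
    "product_category": "public",
}

# Ordered from least to most sensitive.
LEVELS = ["public", "internal", "confidential", "restricted"]

def fields_cleared_for_training(fields, max_level="internal"):
    """Return only the fields at or below the allowed sensitivity level.

    Unmapped fields default to 'restricted' so nothing unclassified
    slips into a training set by accident.
    """
    limit = LEVELS.index(max_level)
    return [
        f for f in fields
        if LEVELS.index(FIELD_SENSITIVITY.get(f, "restricted")) <= limit
    ]

cleared = fields_cleared_for_training(FIELD_SENSITIVITY)
print(cleared)
```

The deny-by-default behavior for unmapped fields is the design point worth noting: classification done at the planning stage lets the pipeline refuse data it doesn't recognize, rather than relying on downstream checks.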

Learn more about ML training methods, adoption rates, business considerations, security priorities and best practices for implementing ML. Read “Machine Learning: Key Adoption Cybersecurity Considerations.”

Greg Baker is global VP/GM of cyber digital transformation at Optiv Inc. Read more Optiv blogs here.