In recent years, the use of data science, machine learning, and AI has grown substantially within the federal government, opening the door to broad future adoption, especially in the defense sector. AI can perform complex tasks such as cyber security monitoring, insider threat detection, and object identification in aerial imagery, but its most significant benefit is speed. The unparalleled speed at which AI can process vast amounts of data and surface high-quality information has a multitude of uses, particularly for decision-makers. With non-kinetic warfare emerging as a dominant aspect of modern defense, that speed is crucial for responding to threats in real time. AI can also simplify processes, such as eliminating error-prone spreadsheets and improving bureaucratic efficiency. Implementing AI in government settings will transform decision-making, improving accuracy and effectiveness in on-the-ground live missions and within chains of command alike, although there are challenges to address before any large-scale adoption.
The Challenges of Adopting AI
AI presents both opportunities and challenges regardless of industry. Some of the difficulties businesses face in the private sector also exist in the defense space, such as barriers to getting started amid a rapidly changing technology landscape, or understanding the regulations and privacy concerns involved in working with data.
Investing in AI technology is challenging precisely because the landscape changes so quickly. According to Rob Dwyer, a KPMG Advisory Principal specializing in technology in government, “... it’s difficult even for people who focus on AI for a living to keep up with the market” (source). With so much innovation occurring at once in the AI technology sphere, the rate of new advancements is outpacing the rate at which those technologies are adopted. On top of the rapid pace of development, the sheer number of options leaves decision-makers paralyzed by choice: 75% agreed that “they struggle to select the best AI technologies” (source).
Security, and more specifically cyber security, is an ever-present challenge for adopting new technology within the defense sector. Unlike commercial environments, the cloud cannot be relied on in a contested battlespace, and cybersecurity breaches and privacy violations are among the greatest risks of AI adoption in defense. With such sensitive materials and data being processed, implementing AI in federal and security spaces requires a level of nuance and regulation that other industries may not face. The defense space poses an additional challenge: agile computing is essential for deploying algorithms and making real-time decisions, yet doing so while simultaneously adhering to safety and privacy protocols can be something of a paradox.
Aside from the technical and security challenges, the adoption of AI technology is being bottlenecked by perceptions of what AI means and can do. Trust in machine learning systems is a significant obstacle: 33% of US CEOs cite employee trust as a barrier to AI adoption, as revealed in a recent study by EY (source). Yet CEOs and business leaders are far more optimistic than their employees; according to a study by EY, 87% of CEOs and business leaders completely or somewhat trust AI technology (source). The onus of implementing machine learning falls on the C-suite: not only ensuring employees are well-educated on the nuances of AI and its usage, but also upskilling the workforce to use the technology with confidence. Jeff Wong, EY Global Chief Innovation Officer, says: “Employees need to be able to trust, utilize and maximize the full potential of the technology, as well as see its benefits for scaled implementation to be successful in any organization.” Ultimately, AI is about collaboration, and it is imperative that we use this technology in concert with human capability.

A Bright Future Ahead
Corporate responsibility is not a new mission, but it has become a more complicated one as machine learning assumes a larger role in how work is done. As a matter of urgency and obligation, enterprises must consider how to address the immense societal impact that will come as work and decision-making change profoundly.
With companies ushering in the era of AI, we are facing unprecedented times of technological advancement and data processing, which poses new risks to safety for society, employees, and companies alike. Ethical data usage and security are at the forefront of these issues and require top-down control considering the long-term effects of AI implementation and decision-making.
TerraSense has been working with both government agencies and the private defense sector for the last few years, developing a fine-tuned understanding of the challenges each faces. We’ve been researching and creating solutions that not only improve efficiency but lead to safer decision-making, by building secure, ethical AI frameworks in conjunction with highly trained employees and specialists in the field. Ultimately, implementing responsible AI comes down to leaders and decision-makers developing strong oversight and policies, preparing and upskilling employees in the proper use of AI, and creating morally responsible and secure data frameworks. Find out more about what we’re creating here.