In my view, technology is best served up with a heaping side of science. This is especially true when it comes to computer security, where a distinct absence of science is a problem.
I began my career at Reliable Software Technologies (Cigital) in 1995 as a research scientist, and the early days of Cigital Labs are still close to my heart. I spent most of my 21 years at Cigital helping run things and advising other businesses on technology issues. Though I remain active in the computer science research community and interact closely with academics all over the world, I am best known as an active technologist in the corporate world. Cigital was acquired by Synopsys in 2016. I stopped working for Synopsys in 2019. I co-founded the Berryville Institute of Machine Learning (BIML) in 2019.
My approach during graduate school at Indiana University way back in the day was to master a subject by publishing a set of peer-reviewed articles. Neural networks, genetic algorithms, case-based reasoning, robotics, and Doug Hofstadter’s models were all important targets. When I first started working in computer security I did the same publishing thing. But the world has changed since then, and so have I.
I continue to believe that giving back to academia is essential. I try to deliver a handful of academic talks at various schools every year, with periodic stops at the University of Maryland, Indiana University, Johns Hopkins, North Carolina State University, James Madison University, Purdue, Waterloo, Stanford, and other schools. If I am on the road for business reasons, I try to seek out a great nearby school to visit.
I have acted as an official Advisor to the Computer Science Department at UC Davis and the Computer Science Department at the University of Virginia (where we created a BA in the College of Arts and Sciences). I continue to serve on the Dean’s Advisory Council of the Luddy School of Informatics, Computer Science, and Engineering at Indiana University. In 2013 I was awarded a Career Achievement Award from Indiana University.
Back in 1999, I was asked to chair the Infosec Research Council’s study on Malicious Code. The result of that collaborative effort was a paper published in IEEE Software called Attacking Malicious Code: A Report to the Infosec Research Council. In 2009, that paper was chosen as one of IEEE Software’s 25th-Anniversary Top Picks, meaning it was one of 35 recommended papers selected from a pool of over 1200.
In 2005, I was elected to a three-year term on the Board of Governors of the IEEE Computer Society.
In 2009, Cigital released the first iteration of the Building Security In Maturity Model (BSIMM). The BSIMM project soon escaped the test tube and quickly became a de facto standard for measuring software security. BSIMM9, the last version I was involved with, was built directly from data observed in 120 software security initiatives, from firms including: Adobe, Aetna, Alibaba, Amgen, ANDA, Autodesk, Axway, Bank of America, Betfair, BMO Financial Group, Black Duck Software, Black Knight Financial Services, Box, Canadian Imperial Bank of Commerce, Capital One, City National Bank, Cisco, Citigroup, Citizens Bank, Comerica Bank, Cryptography Research (a division of Rambus), Dahua, Depository Trust & Clearing Corporation, Ellucian, Experian, F-Secure, Fannie Mae, Fidelity, Freddie Mac, General Electric, Genetec, Global Payments, Highmark Health Solutions, Horizon Healthcare Services, Inc., HSBC, Independent Health, iPipeline, Johnson & Johnson, JPMorgan Chase & Co., Lenovo, LGE, LinkedIn, McKesson, Medtronic, Morningstar, Navient, NCR, NetApp, NewsCorp, NVIDIA, NXP Semiconductors N.V., PayPal, Principal Financial Group, Qualcomm, Royal Bank of Canada, Scientific Games, Sony Mobile, Splunk, Synopsys SIG, Target, TD Ameritrade, The Advisory Board, The Home Depot, The Vanguard Group, Trainline, Trane, U.S. Bank, Veritas, Verizon, Wells Fargo, Zendesk, and Zephyr Health. You may download a copy of the BSIMM for yourself and use it under the Creative Commons license.
After 2019, I was no longer involved with the BSIMM project, which, in my view, has been improperly continued by Synopsys. Your mileage may vary with current iterations.
In 2014 I helped to found the IEEE Center for Secure Design through the IEEE Computer Society. The first IEEE CSD report (which was the first IEEE publication ever to be released under the Creative Commons) is called Avoiding the Top Ten Software Security Design Flaws.
After two years of study, the CISO Report was publicly released in January 2018. The report describes data gathered in a series of extended in-person interviews with 25 CISOs. The firms we chose to study include ADP, Aetna, Allergan, Bank of America, Cisco, Citizens Bank, Eli Lilly, Facebook, Fannie Mae, Goldman Sachs, HSBC, Human Longevity, JPMorgan Chase, LifeLock, Morningstar, Starbucks, and U.S. Bank. Collectively, our population represents just under 150 years of experience in the CISO role. This is joint work with Sammy Migues and Brian Chess. The CISO report is available here.
Research at the Berryville Institute of Machine Learning focuses on machine learning security—the idea of building security in as applied to machine learning technology itself. Our groundbreaking 2020 work introduced the BIML-78, a set of security risks associated with a generic ML process model. This early work has been applied in the Irius Risk threat modeling tool, at Google for internal analysis, and by the United States Air Force in research grant solicitations.
On January 24, 2024, BIML released An Architectural Risk Analysis of Large Language Models, in which we present a basic architectural risk analysis (ARA) of large language models (LLMs), guided by an understanding of standard machine learning (ML) risks as previously identified by BIML.
We continue active research on ML risk grounded in a working theory of distributed representation.
At BIML, we believe that moving beyond “adversarial AI” and the all-too-prevalent “attack of the day” approach to AI security is essential. In our view, the presence of an “adversary” is by no means necessary when it comes to design-level risks in machine learning systems. That is, sometimes risks don’t require a specific attacker with specific motivations to be risks. Insecure systems invite attacks (whether or not such attacks have yet been discovered). That’s why we describe our field of work as “machine learning security” instead of “adversarial AI.”
This is joint work with Harold Figueroa, Victor Shepardson, Richie Bonett, and Katie McMahon.