Writing penetration testing software that contains artificial intelligence (AI) features raises several ethical concerns. As with any penetration testing activity, always obtain consent and written permission from those subject to the testing. Explicit permission is essential: without it, testing may result in unauthorized access or the compromise of data and systems. AI features can also behave in unexpected ways, and this possibility should be considered before use.
The development of AI features must stay within legal and ethical boundaries, including global privacy and confidentiality requirements. From the design phase onward, developers and users must ensure the software does not cause harm, whether intentional or unintentional.
Great care is needed with any automated, AI-driven exploitation or “auto-compromise” features. The combination of previously unknown vulnerabilities and AI logic could cause unexpected and severe damage, and the use of AI may itself introduce new vulnerabilities that go undetected. Consider adding a kill switch and restricting execution to specific networks, never to internet-connected systems; a minimal sketch of such safeguards follows. Scope should be strictly limited to test and development systems and test networks only.
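To illustrate the kill-switch and scoping safeguards just described, here is a minimal Python sketch that checks two gates before any automated action runs: an operator-controlled kill-switch file and an allowlist of private test-network ranges. All names here (ALLOWED_TEST_NETWORKS, KILL_SWITCH_FILE, guarded_run) are hypothetical stand-ins rather than any real tool's API, and the allowed ranges are assumptions about a lab setup.

```python
# Minimal sketch of two safeguards: a kill switch and a network allowlist.
# All names and ranges below are hypothetical illustrations.
import ipaddress
import os
import sys

# Hypothetical: private (RFC 1918) ranges used by the lab test network.
ALLOWED_TEST_NETWORKS = [
    ipaddress.ip_network("10.10.0.0/16"),
    ipaddress.ip_network("192.168.56.0/24"),
]

# Hypothetical: operators create this file to halt all automated activity.
KILL_SWITCH_FILE = "/etc/pentest/KILL_SWITCH"


def target_in_scope(target_ip: str) -> bool:
    """Return True only if the target sits inside an approved test network."""
    addr = ipaddress.ip_address(target_ip)
    return any(addr in net for net in ALLOWED_TEST_NETWORKS)


def kill_switch_engaged() -> bool:
    """Return True if an operator has engaged the kill switch."""
    return os.path.exists(KILL_SWITCH_FILE)


def guarded_run(target_ip: str) -> None:
    """Refuse to act unless the target is in scope and no kill switch is set."""
    if kill_switch_engaged():
        sys.exit("Kill switch engaged; aborting all automated actions.")
    if not target_in_scope(target_ip):
        sys.exit(f"{target_ip} is outside the approved test networks; refusing to run.")
    # ... hand off to the AI-driven exploitation logic here ...
    print(f"{target_ip} is in scope; proceeding under supervision.")


if __name__ == "__main__":
    guarded_run("10.10.3.17")   # inside 10.10.0.0/16: proceeds
    guarded_run("203.0.113.9")  # public address: refused, process exits
```

The important design choice is that the checks fail closed: if either gate cannot be satisfied, the tool stops rather than continuing with a default.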
Ultimately, when things go wrong the responsibility will always rest with a human; an AI will never be transparent and accountable. It is highly recommended that any software containing AI features be vetted by legal experts, particularly in a commercial setting. A knife can be used to cut vegetables or to kill a person; knife manufacturers will never worry over the ethical considerations, but the knife user can lose their liberty under extreme circumstances.