This SSDLC document outlines secure coding practices tailored for Canal — a team running a Django backend, backed by AWS Aurora and orchestrated via Kubernetes (k8s), that serves a Next.js frontend through Apollo GraphQL deployed on Vercel. The focus is on the daily practices and considerations that maintain a high security posture throughout development and deployment.
Canal follows a security-first approach in its SSDLC. Security considerations are integrated into every stage of the development process, from the initial design phase to the final deployment. This includes:
SSDLC & AI:
AI Model Threat Modeling: Introduce threat modeling for AI components, focusing on potential adversarial attacks (e.g., model poisoning or backdoor attacks) during training or inference. This helps identify and mitigate AI-specific risks during the design phase.
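One concrete mitigation that often falls out of threat modeling for model poisoning is supply-chain integrity for model artifacts: refuse to load any model file whose digest does not match a pinned, reviewed value. The sketch below is illustrative, not Canal's actual pipeline; the registry name and digest are placeholders, and in practice the pinned hashes would live in a signed manifest or the CI/CD artifact store.

```python
import hashlib

# Hypothetical allow-list of approved model artifacts. In a real pipeline
# this mapping would come from a signed manifest, not be hard-coded.
TRUSTED_MODEL_HASHES = {
    "fraud-model-v3.pt": "<pinned-sha256-digest>",
}

def sha256_of(path: str) -> str:
    """Stream a file in chunks and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model_artifact(path: str, expected_sha256: str) -> bool:
    """Gate model loading: only load bytes that match the pinned digest."""
    return sha256_of(path) == expected_sha256
```

A loader would call `verify_model_artifact` before deserializing the file and fail closed on a mismatch, so a poisoned or tampered artifact never reaches inference.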
Data Privacy in AI: Ensure that all datasets used for training AI models, especially those containing sensitive data (e.g., PII), are anonymized, or apply techniques such as differential privacy to protect data during both model training and deployment.
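As a minimal sketch of the anonymization step (not Canal's actual pipeline; function and field names are illustrative), PII fields can be pseudonymized with a keyed HMAC before records enter a training set. The same input maps to the same token, so joins across tables still work, while the secret key prevents dictionary-style reversal; destroying or rotating the key breaks linkability. Note that full differential privacy is a separate technique requiring a dedicated library and a noise budget.

```python
import hmac
import hashlib

def pseudonymize(value: str, key: bytes) -> str:
    """Replace a PII value with a keyed, non-reversible token (HMAC-SHA256)."""
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()

def anonymize_record(record: dict, pii_fields: set, key: bytes) -> dict:
    """Return a copy of a training record with PII fields pseudonymized.

    Non-PII fields pass through unchanged so model features are preserved.
    """
    return {
        k: pseudonymize(str(v), key) if k in pii_fields else v
        for k, v in record.items()
    }

# Example: strip the (hypothetical) "email" field before training.
cleaned = anonymize_record(
    {"email": "user@example.com", "plan": "pro"},
    pii_fields={"email"},
    key=b"from-secrets-manager",  # in production, fetch from a secrets store
)
```

The pseudonymization key is itself a secret and should be handled like any other credential (e.g., a secrets manager), never committed alongside the data.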