Reference walkthrough of the RealEyez AI Image Detection System, developed by Team VerifEyez. This project showcases an end-to-end machine learning pipeline and supporting cloud infrastructure for classifying AI-generated images. The repository documents dataset curation, model architecture, training and evaluation workflows, security considerations, and observed system limitations, illustrated with visuals that communicate system behavior and performance.
RealEyez is a system designed to bring clarity and authenticity to a digital landscape increasingly challenged by synthetic media. As AI-generated imagery becomes more realistic, distinguishing the real from the artificial has become a critical problem, often leaving viewers wondering whether what they are seeing is genuine or entirely fabricated.
This walkthrough follows the story of RealEyez, highlighting how we combined cloud infrastructure, DevOps best practices, and machine learning techniques to design and deploy an application capable of detecting AI-generated images.
Airtable was used as our primary project management tool, providing a shared workspace to track requirements, assign team roles, and visualize timelines. Its intuitive interface allowed the team to coordinate tasks efficiently throughout development.
RealEyez was developed to address a growing trend in everyday digital consumption. Many people scroll through social media feeds to unwind, yet increasingly encounter images that raise a simple but important question: Is this real, or was it generated by AI?
With deepfake and generative image technologies advancing rapidly, the ability to reliably detect manipulated imagery has become more important than ever.
Our deployment strategy focused on building a secure, reliable, and fault-tolerant cloud infrastructure using DevOps best practices.
The CI/CD pipeline began with a branch validation stage to ensure changes met baseline quality checks.
A cleanup stage followed, designed to remove temporary and unused resources. Storage constraints and low disk volume issues surfaced frequently during early testing, and this stage ensured those problems did not reach production environments.
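The kind of disk-space gate this stage enforced can be sketched in a few lines of Python. This is illustrative only: the function name and threshold are hypothetical, not taken from the actual pipeline.

```python
import shutil

def disk_ok(path="/", min_free_frac=0.10):
    """Illustrative cleanup-stage gate: return False (fail the build
    early) if free disk space on `path` drops below the threshold."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total >= min_free_frac
```

Running a check like this before deployment stages surfaces low-disk conditions in CI rather than in production.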
To protect the confidentiality, integrity, and availability of our system, we adopted a Defense-in-Depth security strategy.
Defense in Depth is a layered approach that applies multiple security controls across different system components to reduce overall risk.
Tools supporting this strategy included SonarQube, OWASP ZAP, Trivy, and Checkov, each addressing a different layer of potential vulnerability, from application code to container dependencies and infrastructure configuration.
With the repository and codebase secured, we moved to packaging the application into container images.
Each container is a self-contained package that includes everything needed to run the application. Just as the same mobile app can run consistently across different phones, Docker images ensure consistent application behavior across environments.
Once the testing container image was running, we evaluated how the application handled real-world attack scenarios.
OWASP ZAP is a Dynamic Application Security Testing (DAST) tool that simulates real-world attacks to evaluate how an application behaves under pressure.
When new code is pushed to the repository, a GitHub webhook triggers execution of the Jenkins pipeline.
In our case, the entrypoint script runs database migrations before launching the Django application server. Migrations ensure the database schema is properly initialized or updated before the application becomes available.
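The real entrypoint is a script baked into the image; the sketch below only mirrors its order of operations in Python, assuming the standard Django `manage.py` commands. The command strings and port are illustrative.

```python
# Sketch of the container entrypoint logic: apply database migrations
# first, then start the Django application server on port 8000.
import subprocess

MIGRATE_CMD = ["python", "manage.py", "migrate", "--noinput"]
SERVE_CMD = ["python", "manage.py", "runserver", "0.0.0.0:8000"]

def entrypoint(run=subprocess.check_call):
    """Run migrations, then launch the server."""
    run(MIGRATE_CMD)  # fail fast if the schema cannot be updated
    run(SERVE_CMD)    # only starts once migrations have succeeded
```

Because `run` raises on a non-zero exit code, a failed migration stops the container before the application ever accepts traffic.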
Once complete, the application is accessible to users through port 8000, which is exposed by the container.
Trivy scans container images for vulnerabilities in operating system packages and application dependencies, helping identify security risks early in the deployment lifecycle.
These findings led us to update several Python dependencies to more secure versions before deployment.
The infrastructure supporting RealEyez consists of over 40 cloud resources, all provisioned and managed using Terraform.
During the planning stage, Terraform generates an execution plan from the configuration files, allowing us to review proposed infrastructure changes before applying them.
It analyzes Terraform configurations to detect misconfigurations and violations of security best practices before infrastructure is deployed.
Prometheus and Grafana were selected as open-source monitoring tools to reduce project costs while maintaining observability.
We automated the installation, configuration, and integration of both tools as part of our infrastructure provisioning.
Prometheus collects metrics from Node Exporter instances running on application servers, while Grafana visualizes those metrics to provide real-time insight into system health.
Grafana consumes Prometheus data to create dashboards and alerts that summarize system behavior at a glance.
Alerts were configured for key metrics such as CPU utilization and network traffic, enabling proactive response before issues escalated.
A Convolutional Neural Network (CNN) is a deep learning model commonly used for image classification and object recognition tasks due to its ability to learn spatial features from multi-dimensional image data.
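The spatial-feature idea behind convolutional layers can be shown with a tiny pure-Python convolution. This is a conceptual illustration, not code from the project; real layers learn their kernels rather than using hand-written ones.

```python
def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation, as in most deep
    learning frameworks) over a single-channel image."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + u][j + v] * kernel[u][v]
                for u in range(kh) for v in range(kw))
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

# A vertical-edge kernel responds wherever intensity changes left to right.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_kernel = [[1, -1], [1, -1]]
feature_map = conv2d(image, edge_kernel)  # non-zero only at the edge column
```

Stacking many such learned filters, with pooling and non-linearities between them, is what lets a CNN build up from edges to textures to object-level features.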
A labeled dataset containing both real and synthetic images was divided into training, validation, and testing sets to measure generalization performance and reduce overfitting.
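A split like the one described can be sketched as follows; the fractions, seed, and filenames are hypothetical, not the project's actual values.

```python
import random

def split_dataset(items, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle a labeled dataset and partition it into
    train / validation / test subsets."""
    items = list(items)
    random.Random(seed).shuffle(items)  # fixed seed => reproducible split
    n = len(items)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = items[:n_test]
    val = items[n_test:n_test + n_val]
    train = items[n_test + n_val:]
    return train, val, test

# (path, label) pairs; label 1 marks an AI-generated image (illustrative).
samples = [(f"img_{i}.png", i % 2) for i in range(100)]
train, val, test = split_dataset(samples)  # 70 / 15 / 15 split
```

Keeping the test set untouched until final evaluation is what makes the reported accuracy an estimate of generalization rather than memorization.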
We leveraged a pre-trained EfficientNetB0 model, fine-tuning it with additional layers for AI image detection.
EfficientNetB0 balances model depth, width, and resolution using compound scaling, enabling strong accuracy with relatively low computational cost.
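Compound scaling can be made concrete with the coefficients published for EfficientNet (α = 1.2, β = 1.1, γ = 1.15, chosen so that α·β²·γ² ≈ 2, meaning FLOPs roughly double per unit of the scaling coefficient φ). B0 is the φ = 0 baseline used here.

```python
# EfficientNet compound scaling: depth, width, and input resolution
# all grow together as powers of a single coefficient phi.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # published EfficientNet constants

def scale_factors(phi):
    """Return (depth, width, resolution) multipliers for a given phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

# B0 corresponds to phi = 0: the unscaled baseline network.
depth, width, resolution = scale_factors(0)
```

Larger variants (B1 through B7) simply increase φ, which is why the family trades compute for accuracy so predictably.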
To support training at scale, all images were stored in an Amazon S3 bucket, providing scalable and secure data access.
Model training was performed on a p3.2xlarge EC2 instance equipped with an NVIDIA V100 GPU, enabling parallel computation and faster convergence.
Looking ahead, placing NGINX in front of the application as a reverse proxy would allow Django to focus on application logic while NGINX handles load balancing and traffic optimization.
Additional scalability could be achieved by adopting Kubernetes through Amazon EKS, which abstracts control plane management while enabling container orchestration.
With a $200 budget, infrastructure choices were carefully evaluated. For example, db.t3.micro instances were selected over larger alternatives to reduce cost, while allocating sufficient budget for a p3.2xlarge GPU instance used during model training.
In total, the project came in at $99.40, just under half of the allocated budget.
- Kura Labs
- LinkedIn: Joe Reynolds
- Email: joekuralabs@gmail.com