My Journey Deploying Laravel on AWS EKS (The Complete Story)

The Challenge

Picture this: You’ve built a beautiful Laravel application. It works perfectly on your local machine. Your staging server handles it well. But now you’re facing real growth, and suddenly those traditional hosting solutions don’t seem so adequate anymore. Sound familiar?

That was me six months ago. I needed horizontal scaling, high availability, zero-downtime deployments, and the peace of mind that comes with enterprise-grade infrastructure. After weeks of research and plenty of trial and error, I successfully deployed our Laravel application on AWS Elastic Kubernetes Service (EKS).

This isn't just another tutorial; it's the complete story of what worked, what didn't, and everything I wish someone had told me before I started.

Why I Chose Kubernetes (And Why You Might Too)

The Breaking Point

Our application sees sharp traffic spikes during business hours. With traditional hosting, we had two bad options: over-provision resources and waste money during quiet hours, or under-provision and watch the application struggle during peak times. Neither was acceptable.

Kubernetes promised something different: the ability to scale horizontally based on actual demand. One instance struggling? Spin up another automatically. Traffic died down? Scale back down. It was exactly what we needed.

The Hidden Benefits

What I didn’t expect were the other advantages:

  • Declarative configuration: Your entire infrastructure is defined in YAML files, version-controlled in Git. Want to see what changed last month? Just look at the git history.
  • Self-healing: If a container crashes, Kubernetes automatically restarts it without human intervention.
  • Zero-downtime deployments: Rolling updates mean your users never see a maintenance page.
  • Consistency across environments: The same configuration works in development, staging, and production.

Was it worth the learning curve? Absolutely. But let me walk you through how to avoid the pitfalls I encountered.

Understanding the Architecture (Before We Build)

Before diving into commands, let’s understand what we’re building. 

The Big Picture

Our architecture consists of several layers, each serving a specific purpose:

Layer 1: The Application Layer: This is where your Laravel application lives. But here's the interesting part: we don't run PHP and Nginx on separate servers. Instead, they run as separate containers in the same "pod" (Kubernetes's term for a group of containers that work together). Why? They need to talk to each other constantly, so keeping them together reduces latency and simplifies networking.

Layer 2: The Data Layer: Instead of managing MySQL ourselves, we use Amazon RDS. This was a game-changer. No more worrying about backups, replication, or database crashes. AWS handles all of that. The database lives in a private subnet with no public access; only our Kubernetes cluster can reach it.

Layer 3: The Storage Layer: Here’s a challenge most tutorials gloss over: file uploads. When you have multiple application instances (pods), where do uploaded files go? If user A uploads a file to pod 1, and user B tries to access it but hits pod 2, the file appears missing. Or if a pod gets deleted and a new one spins up, all the old files disappear.

The solution? Shared storage such as Amazon EFS (Elastic File System), whose volumes mount to all pods simultaneously. Every pod sees the same files. Problem solved. Unlike EBS volumes, which attach to a single node at a time (ReadWriteOnce), EFS provides true shared storage (ReadWriteMany) across your entire cluster.

In this deployment I went with EBS-backed persistent volumes for simplicity, which works as long as the pods sharing a volume land on the same node; S3 is more scalable for large applications and is the better option to consider.

Layer 4: The Network Layer: Users don't connect directly to your pods. Traffic flows through an AWS Application Load Balancer, then to the Nginx Ingress Controller, which routes requests to the appropriate service. This layer also handles SSL termination; Let's Encrypt certificates are automatically issued and renewed.

Layer 5: The Control Layer: The EKS control plane orchestrates everything. It decides which nodes run which pods, handles scaling, monitors health, and coordinates deployments. You don't manage this directly; AWS does.

Now that we understand what we’re building, let’s build it.

Setting Up the Foundation

The Toolbox

First, we need the right tools:

AWS CLI, kubectl, eksctl, Helm, and Docker. These aren’t just nice-to-haves; each serves a critical purpose. The AWS CLI talks to AWS services, kubectl communicates with Kubernetes, eksctl simplifies EKS operations, Helm manages Kubernetes packages, and Docker builds your images.

The installation varies by operating system, but the principle is the same: install official versions from official sources. Avoid distribution package managers for Docker on Linux; they often ship outdated versions that cause weird issues later.

AWS Credentials: Getting It Right

Here’s something that tripped me up initially: IAM permissions. You need extensive permissions to create EKS clusters. Don’t use your root account (never use root for anything), but do create an IAM user with these policies:

  • EC2 and EKS full access (for creating clusters and nodes)
  • ECR full access (for Docker images)
  • RDS full access (for the database)
  • VPC full access (Kubernetes needs to create networking resources)
  • IAM full access (to create service accounts)

In production, you’d want more granular permissions, but for getting started, this gets you moving without constant “access denied” errors.

Configure the CLI with aws configure and verify it works with aws sts get-caller-identity. If you see your user details, you’re good.
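
If you prefer to script this setup, here's a minimal sketch using the AWS CLI. The user name is a placeholder, and since AWS ships no single "EKS full access" managed policy, an inline eks:* policy fills that gap:

# Create a dedicated IAM user for cluster administration
aws iam create-user --user-name eks-deployer

# Attach the managed policies listed above
for policy in AmazonEC2FullAccess AmazonEC2ContainerRegistryFullAccess \
              AmazonRDSFullAccess AmazonVPCFullAccess IAMFullAccess; do
  aws iam attach-user-policy --user-name eks-deployer \
    --policy-arn "arn:aws:iam::aws:policy/$policy"
done

# EKS permissions via an inline policy
aws iam put-user-policy --user-name eks-deployer --policy-name eks-full \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"eks:*","Resource":"*"}]}'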

The Database: Why RDS Changes Everything

I initially considered running MySQL inside Kubernetes. “It’s just another container,” I thought.

I was wrong. Databases are stateful. They need reliable storage, consistent networking, and careful resource management. More importantly, they need backups, point-in-time recovery, and automated failover. Building all that yourself is a full-time job.

RDS gives you all of this out of the box. Create a MySQL 8.0 instance through the AWS Console. Choose db.t3.micro to start (you can scale later). Enable automated backups with a 7-day retention period.

Critical security note: Set “Public Access” to No. Your database should never be accessible from the internet. We’ll configure security groups to allow access only from the EKS cluster.
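
If you'd rather script it, here's roughly the equivalent CLI call; the identifier, engine version, and password are placeholders to replace:

aws rds create-db-instance \
  --db-instance-identifier laravel-db \
  --db-instance-class db.t3.micro \
  --engine mysql \
  --engine-version 8.0.35 \
  --master-username admin \
  --master-user-password 'REPLACE_ME' \
  --allocated-storage 20 \
  --backup-retention-period 7 \
  --no-publicly-accessible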

The database takes about 10 minutes to create. While waiting, let’s talk about the cluster.

Creating the EKS Cluster: Where the Magic Happens

This is the centrepiece of our infrastructure. One command creates an entire production-ready Kubernetes cluster:

eksctl create cluster \
  --name laravel-app-cluster \
  --region ap-south-1 \
  --nodegroup-name workers \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 3 \
  --managed

But what’s actually happening here? Let me break it down because understanding this saved me countless hours of troubleshooting.

The Control Plane: AWS creates and manages the Kubernetes master nodes. You never see these machines, never SSH into them, never worry about their patches or updates. AWS handles everything.

The VPC: eksctl creates a new VPC with public and private subnets across multiple availability zones. Your application pods run in private subnets (no direct internet access), while load balancers sit in public subnets.

The Node Group: These are the EC2 instances where your containers actually run. We’re starting with two t3.medium instances (2 vCPU, 4GB RAM each). The “managed” flag means AWS handles OS patches and updates.

The IAM Roles: Multiple IAM roles are created: one for the cluster itself, one for the nodes, and others for specific services. This granular permission model is how Kubernetes securely interacts with AWS services.

This takes 15-20 minutes. 
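
Once it finishes, point kubectl at the new cluster and confirm both worker nodes registered (eksctl usually writes your kubeconfig automatically; the first command simply regenerates it):

aws eks update-kubeconfig --region ap-south-1 --name laravel-app-cluster
kubectl get nodes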

Connecting the Database to the Cluster

Remember that RDS security group we created? Now we need to tell it: “Allow traffic from the EKS cluster.”

Get the cluster’s security group ID from aws eks describe-cluster, then add it to the RDS security group’s inbound rules. Allow MySQL (port 3306) from that security group only.

This is security in action. The database accepts connections only from pods running in your cluster. No one else can reach it, even if they somehow got the connection string.
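
In practice that's two commands; the security group IDs here are placeholders you'll swap for your own:

# The security group attached to the cluster
aws eks describe-cluster --name laravel-app-cluster --region ap-south-1 \
  --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId' --output text

# Allow MySQL from the cluster security group into the RDS security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-RDS_PLACEHOLDER \
  --protocol tcp --port 3306 \
  --source-group sg-CLUSTER_PLACEHOLDER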

The Kubernetes Infrastructure

Storage: The Problem Everyone Forgets

Containers are ephemeral; they can be destroyed and recreated at any moment. If you store files inside a container, those files vanish when the container dies.

For a Laravel application, this is catastrophic. User uploads, generated PDFs, cached images, all gone.

The solution is the EBS CSI (Container Storage Interface) driver. This lets Kubernetes create and manage Amazon EBS volumes, real, persistent disks that survive container restarts.

Setting this up requires three steps, and yes, they’re all necessary:

First, associate an OIDC provider with your cluster. This allows Kubernetes service accounts to assume IAM roles, a critical security feature.

Second, create an IAM service account with permissions to create EBS volumes. This is how Kubernetes gets permission to talk to the EBS API.

Third, install the EBS CSI driver as an EKS add-on. This installs the controller pods that handle volume creation and mounting.

Test it by checking if ebs-csi-controller pods are running in the kube-system namespace. If they are, you’re ready for persistent storage.
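
With eksctl, the three steps look roughly like this (region, role name, and ACCOUNT_ID are placeholders):

# 1. Associate an OIDC provider
eksctl utils associate-iam-oidc-provider \
  --cluster laravel-app-cluster --region ap-south-1 --approve

# 2. Service account with permission to manage EBS volumes
eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa --namespace kube-system \
  --cluster laravel-app-cluster --region ap-south-1 \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve --role-only --role-name AmazonEKS_EBS_CSI_DriverRole

# 3. Install the driver as an EKS add-on
eksctl create addon --name aws-ebs-csi-driver \
  --cluster laravel-app-cluster --region ap-south-1 \
  --service-account-role-arn arn:aws:iam::ACCOUNT_ID:role/AmazonEKS_EBS_CSI_DriverRole

# Verify
kubectl get pods -n kube-system | grep ebs-csi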

The Ingress Controller: Your Application’s Front Door

Users need a way to reach your application. In traditional hosting, you’d configure Nginx or Apache. In Kubernetes, you use an Ingress Controller.

Think of it as a smart reverse proxy. It receives all incoming traffic, reads the HTTP headers, and routes requests to the appropriate service. It also handles SSL termination, load balancing, and can even implement rate limiting or authentication.

We use Nginx Ingress Controller because it’s mature, well-documented, and works seamlessly with AWS. Install it via Helm, Kubernetes’s package manager. Helm is like Composer for Kubernetes; it installs complex applications with predefined configurations.

When you install the Ingress Controller, AWS automatically creates an Application Load Balancer. Check the service to get its DNS name; you'll need it for configuring your domain.
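
A typical Helm installation, using the project's official chart repository:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# The EXTERNAL-IP column shows the load balancer's DNS name
kubectl get svc -n ingress-nginx ingress-nginx-controller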

SSL Certificates: Automation is Everything

cert-manager is a Kubernetes add-on that automates certificate management. It talks to Let’s Encrypt, proves you control the domain (via HTTP challenge), obtains certificates, and automatically renews them before expiration.

Install cert-manager with a single kubectl command. It runs in its own namespace, continuously monitoring for certificate requests. When you deploy your application with an Ingress resource, cert-manager sees it and automatically provisions a certificate. No manual intervention needed.
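
Installation really is a single kubectl apply (pin whichever cert-manager release is current; v1.14.4 below is just an example), followed by a ClusterIssuer that tells cert-manager how to reach Let's Encrypt. The issuer name and email are placeholders:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com          # expiry notices go here
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          ingress:
            class: nginx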

DNS: Connecting Your Domain

You have a load balancer. You have cert-manager ready. Now you need to point your domain to the load balancer.

If you use Route 53 (AWS’s DNS service), create an A record alias pointing to the Application Load Balancer. AWS handles the IP addresses for you.

Using another DNS provider? Create a CNAME record pointing to the load balancer’s DNS name. Either way works, but Route 53 is slightly faster because it integrates directly with AWS infrastructure.

DNS propagation takes time, sometimes 5 minutes, sometimes an hour. Use nslookup or dig to check if it’s resolving correctly. Patience here saves frustration later.

Preparing Your Laravel Application

The Dockerfile: Containerizing Laravel

This is where your application becomes cloud-native. A Dockerfile is a recipe for building a container image. It tells Docker: start with this base image, install these dependencies, copy these files, run these commands.

For Laravel, we start with the official php:8.2-fpm image. Why FPM (FastCGI Process Manager)? Because it’s designed for production. It manages PHP worker processes efficiently, handles concurrent requests well, and integrates perfectly with Nginx.

The Dockerfile has several critical sections:

System Dependencies: We install git, curl, and various libraries needed for PHP extensions. Laravel needs image manipulation (GD), database connectivity (PDO), and internationalization support (intl).

PHP Extensions: These aren’t included in the base image. We compile and install them: pdo_mysql for database, mbstring for string handling, bcmath for arbitrary precision math (required by Laravel’s encryption), and others.

PHP-FPM Configuration: Here's a gotcha: by default, PHP-FPM only listens on localhost (127.0.0.1). In a multi-container pod, Nginx can't reach it. We modify the configuration to listen on all interfaces (0.0.0.0).

Composer Dependencies: Install with the --optimize-autoloader and --no-dev flags. This creates an optimized autoloader and skips development dependencies, reducing image size and improving performance.

Laravel Optimization: We run artisan commands to cache configurations, routes, and views. This eliminates file parsing on every request, a significant performance boost.

Permissions: Laravel needs write access to storage and bootstrap/cache directories. Set ownership to www-data (the user PHP-FPM runs as) and appropriate permissions.

The .dockerignore file is equally important. It tells Docker which files to exclude from the image: version control files, local environment files, dependencies (they’ll be reinstalled), and temporary files. This keeps images small and builds fast.
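
Put together, a minimal sketch of such a Dockerfile might look like this; the package and extension lists are illustrative, so trim or extend them for your application:

FROM php:8.2-fpm

# System dependencies for the PHP extensions below
RUN apt-get update && apt-get install -y \
      git curl netcat-openbsd \
      libpng-dev libonig-dev libxml2-dev libzip-dev libicu-dev \
    && rm -rf /var/lib/apt/lists/*

# Extensions Laravel needs: database, strings, math, images, i18n
RUN docker-php-ext-install pdo_mysql mbstring bcmath gd zip intl

# Make PHP-FPM listen on all interfaces so the Nginx container can reach it
RUN sed -i 's/^listen = .*/listen = 0.0.0.0:9000/' \
      /usr/local/etc/php-fpm.d/www.conf

# Composer from the official image
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer

WORKDIR /var/www
COPY . .

# Production dependencies with an optimized autoloader
RUN composer install --optimize-autoloader --no-dev

# Writable directories for Laravel
RUN chown -R www-data:www-data storage bootstrap/cache \
    && chmod -R 775 storage bootstrap/cache

# Config/route/view caching happens in the init container in this setup,
# after APP_KEY and database settings are injected from Secrets.

EXPOSE 9000
CMD ["php-fpm"]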

Building the Image: Local to Cloud

Build your image with docker build -t my-laravel-app . (the trailing dot sets the build context to the current directory). The first build takes several minutes while Docker downloads base images, installs dependencies, and creates layers. Subsequent builds are faster thanks to layer caching.

Before pushing to ECR, authenticate Docker with AWS. The command looks intimidating but it’s just piping your ECR password to Docker login.

Tag your image with the full ECR repository URI, then push. The first push uploads all layers (might take a few minutes). Future pushes only upload changed layers, Docker’s layer system at work.
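
The full sequence, with ACCOUNT_ID standing in for your AWS account number:

# Authenticate Docker with ECR (the token is valid for 12 hours)
aws ecr get-login-password --region ap-south-1 | \
  docker login --username AWS --password-stdin \
  ACCOUNT_ID.dkr.ecr.ap-south-1.amazonaws.com

# Build, tag, push
docker build -t my-laravel-app .
docker tag my-laravel-app:latest \
  ACCOUNT_ID.dkr.ecr.ap-south-1.amazonaws.com/my-laravel-app:latest
docker push ACCOUNT_ID.dkr.ecr.ap-south-1.amazonaws.com/my-laravel-app:latest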

The Kubernetes Manifest (Where It All Comes Together)

This is the heart of your deployment, a single YAML file describing your entire application infrastructure. Let’s break it down piece by piece.

Namespace: Isolation and Organization

Every Kubernetes resource lives in a namespace. Think of it as a folder for organizing resources. We create a dedicated namespace for our application, keeping it separate from system components and other applications.

Why bother? In production, you might have multiple environments (staging, production) or multiple applications in the same cluster. Namespaces prevent naming conflicts and allow you to apply policies per namespace.
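
The resource itself is tiny:

apiVersion: v1
kind: Namespace
metadata:
  name: laravel-app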

ConfigMap: Non-Sensitive Configuration

Laravel needs dozens of environment variables: APP_NAME, APP_ENV, database credentials, API keys, and more. ConfigMaps store this configuration.

Here’s an important distinction: ConfigMaps are for non-sensitive data. They’re not encrypted at rest. Anyone with read access to the cluster can see them. So what goes here? Application name, environment (production), database host, connection settings; things that aren’t secrets but configure behavior.
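
A sketch of the non-sensitive half of Laravel's environment (all values are examples):

apiVersion: v1
kind: ConfigMap
metadata:
  name: laravel-config
  namespace: laravel-app
data:
  APP_NAME: "Laravel"
  APP_ENV: "production"
  APP_DEBUG: "false"
  APP_URL: "https://your-domain.com"
  DB_CONNECTION: "mysql"
  DB_HOST: "your-rds-endpoint.ap-south-1.rds.amazonaws.com"
  DB_PORT: "3306"
  DB_DATABASE: "laravel"
  DB_USERNAME: "admin"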

Secrets: The Sensitive Stuff

APP_KEY and DB_PASSWORD go here. Secrets are base64 encoded (not encrypted, but obfuscated). In a real production environment, you’d use AWS Secrets Manager or a solution like Sealed Secrets for true encryption.

To encode values, use: echo -n "your-value" | base64. The -n flag is critical; it prevents adding a newline character that would corrupt your secret.
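
A matching Secret sketch; the values are base64 of placeholder strings, not real credentials:

apiVersion: v1
kind: Secret
metadata:
  name: laravel-secrets
  namespace: laravel-app
type: Opaque
data:
  APP_KEY: YmFzZTY0OnlvdXIta2V5       # echo -n "base64:your-key" | base64
  DB_PASSWORD: UkVQTEFDRV9NRQ==       # echo -n "REPLACE_ME" | base64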

Storage Classes and Persistent Volume Claims

This is where we define our storage needs. We create three persistent volumes:

Application files (5GB): The entire Laravel codebase lives here. Using a persistent volume means application files persist across pod restarts and are shared among all pods.

Uploads (10GB): User-uploaded files need reliable storage. This volume is mounted to Laravel’s storage/app/public directory and symlinked to public/storage.

Logs (5GB): Laravel logs can grow large. A dedicated volume prevents logs from consuming application space and makes log analysis easier.

The StorageClass specifies we want gp3 (General Purpose SSD) volumes from EBS. The volumeBindingMode: WaitForFirstConsumer setting is clever: it ensures volumes are created in the same availability zone as the pod, reducing latency.
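
A sketch of the StorageClass plus one of the three persistent volume claims (the uploads volume; the others follow the same pattern). Note the ReadWriteOnce access mode, which is what EBS supports:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: laravel-uploads-pvc
  namespace: laravel-app
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-gp3
  resources:
    requests:
      storage: 10Gi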

The Nginx ConfigMap: Web Server Configuration

We need Nginx configuration but don’t want to build it into the Docker image (that would require rebuilding the image for every config change). Solution? Store it in a ConfigMap and mount it as a file.

The configuration does several important things:

PHP-FPM integration: Nginx doesn't execute PHP. It proxies PHP requests to PHP-FPM via FastCGI. The upstream block defines where PHP-FPM listens (localhost:9000; they're in the same pod).

Laravel routing: The location blocks ensure all requests route through index.php, except for static assets (JS, CSS, images) which are served directly by Nginx for performance.

File upload limits: Set to 50MB by default. Laravel won’t process uploads larger than this, protecting against denial-of-service attacks.

Health checks: The /health endpoint returns a simple 200 OK. Kubernetes uses this to determine if the container is healthy. If health checks fail, Kubernetes restarts the container.

Security headers: We deny access to hidden files (anything starting with a dot), preventing exposure of .env or .git files.
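
A condensed sketch of that ConfigMap covering each point above (I inline the fastcgi_pass address instead of a named upstream; both work):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: laravel-app
data:
  default.conf: |
    server {
        listen 80;
        root /var/www/public;
        index index.php;

        client_max_body_size 50M;   # upload limit

        # Kubernetes probes hit this
        location /health {
            access_log off;
            default_type text/plain;
            return 200 'ok';
        }

        # Static assets served directly; everything else hits index.php
        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # Proxy PHP to PHP-FPM in the same pod
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        # Deny hidden files (.env, .git, ...)
        location ~ /\. {
            deny all;
        }
    }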

The Deployment: The Main Event

Init Container: This runs before the main containers start. Its job is critical: it sets up the Laravel environment.

It copies application files to the persistent volume (first boot only), creates necessary directories, sets up symlinks for storage and public directories, and waits for the database to be available (using netcat to check if port 3306 is open).

It then runs Laravel commands: clearing caches, running migrations, creating storage links. By the time init container completes, the application is fully set up.

PHP-FPM Container: This is your Laravel application. It runs PHP-FPM in the foreground, processing PHP requests proxied from Nginx.

It mounts three volumes: the application files, uploads directory, and logs directory. Environment variables come from ConfigMap and Secrets.

Resource Limits: We specify memory (256Mi request, 512Mi limit) and CPU (250m request, 500m limit). The “request” is what Kubernetes guarantees; the “limit” is the maximum allowed. This prevents one pod from starving others of resources.

Probes: Liveness probes determine if the container is healthy (if not, restart it). Readiness probes determine if it's ready to receive traffic (if not, remove it from load balancing). We check if port 9000 is responding; simple but effective.

Nginx Container: Runs alongside PHP-FPM in the same pod. It receives HTTP requests on port 80 and proxies PHP requests to PHP-FPM.

It mounts the same volumes (needs to read application files to serve static assets) and uses the ConfigMap for its configuration.

Its probes hit the /health endpoint we configured earlier.
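
A condensed Deployment sketch showing how the three containers fit together. The image URI is a placeholder, the init script is abbreviated to its essentials, and the shared application-files volume is omitted for brevity:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel
  namespace: laravel-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: laravel
  template:
    metadata:
      labels:
        app: laravel
    spec:
      initContainers:
        - name: laravel-setup
          image: ACCOUNT_ID.dkr.ecr.ap-south-1.amazonaws.com/my-laravel-app:latest
          command: ["sh", "-c"]
          args:
            - |
              # Wait for the database, then prepare Laravel
              until nc -z "$DB_HOST" 3306; do sleep 2; done
              php artisan config:clear
              php artisan storage:link || true
          envFrom:
            - configMapRef: { name: laravel-config }
            - secretRef: { name: laravel-secrets }
      containers:
        - name: laravel           # PHP-FPM
          image: ACCOUNT_ID.dkr.ecr.ap-south-1.amazonaws.com/my-laravel-app:latest
          ports: [{ containerPort: 9000 }]
          envFrom:
            - configMapRef: { name: laravel-config }
            - secretRef: { name: laravel-secrets }
          resources:
            requests: { memory: "256Mi", cpu: "250m" }
            limits: { memory: "512Mi", cpu: "500m" }
          livenessProbe:
            tcpSocket: { port: 9000 }
          readinessProbe:
            tcpSocket: { port: 9000 }
          volumeMounts:
            - { name: uploads, mountPath: /var/www/storage/app/public }
        - name: nginx
          image: nginx:1.25
          ports: [{ containerPort: 80 }]
          livenessProbe:
            httpGet: { path: /health, port: 80 }
          volumeMounts:
            - { name: nginx-conf, mountPath: /etc/nginx/conf.d }
            - { name: uploads, mountPath: /var/www/storage/app/public }
      volumes:
        - name: uploads
          persistentVolumeClaim: { claimName: laravel-uploads-pvc }
        - name: nginx-conf
          configMap: { name: nginx-config }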

The Service: Internal Networking

A Service provides a stable IP address for accessing pods. Even though pods can be destroyed and recreated (with new IP addresses), the Service IP remains constant.

Type ClusterIP means this service is internal, only accessible from within the cluster. External access comes through the Ingress.
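
The Service itself is short:

apiVersion: v1
kind: Service
metadata:
  name: laravel-service
  namespace: laravel-app
spec:
  type: ClusterIP
  selector:
    app: laravel
  ports:
    - port: 80
      targetPort: 80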

The Ingress: External Access with SSL

This is where everything comes together. The Ingress resource defines how external traffic reaches your application.

Annotations configure behavior:

  • cert-manager.io/cluster-issuer: Tells cert-manager to automatically provision a certificate
  • nginx.ingress.kubernetes.io/ssl-redirect: Forces HTTPS (redirects HTTP to HTTPS)
  • proxy-body-size: Allows large uploads
  • CORS settings: If your application serves API requests from different domains

TLS section: Specifies the domain and where to store the certificate. cert-manager sees this and automatically requests a certificate from Let’s Encrypt.

Rules: Define routing. Requests to your domain get routed to the laravel-service on port 80.
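
A sketch of the Ingress; replace the domain, and make sure the issuer annotation matches your ClusterIssuer's name:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: laravel-ingress
  namespace: laravel-app
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - your-domain.com
      secretName: laravel-tls-cert
  rules:
    - host: your-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: laravel-service
                port:
                  number: 80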

Deployment Time

Applying the Manifest

The moment of truth: kubectl apply -f k8s/laravel-manifest.yaml. This single command creates everything: namespace, ConfigMaps, Secrets, storage, deployments, services, and Ingress.

Watch the magic happen with kubectl get pods -n laravel-app -w. You’ll see:

  1. Pods being created
  2. Init container running (this might take a few minutes; it's setting up the environment)
  3. Main containers starting
  4. Status changing to Running

Common issue: If the pod shows CrashLoopBackOff, check logs immediately: kubectl logs <pod-name> -n laravel-app -c laravel-setup. Usually it’s an environment variable issue or database connectivity problem.

The Certificate Dance

Watch the certificate status: kubectl get certificate -n laravel-app. You’ll see:

  • Pending: cert-manager is requesting the certificate
  • Ready: False: Let’s Encrypt is verifying domain ownership
  • Ready: True: Certificate issued successfully!

This takes 2-10 minutes. cert-manager performs an HTTP-01 challenge: Let’s Encrypt makes a request to your domain, cert-manager responds, proving you control it.

If it stays pending for more than 10 minutes, check cert-manager logs. Usually it’s DNS not resolving correctly or the domain not pointing to the load balancer.

First Access

Open your browser and navigate to your domain. If you see your Laravel application with a valid SSL certificate (green lock in the browser), congratulations! You’ve successfully deployed Laravel on Kubernetes.

Test the database connection by accessing a page that queries the database. Check logs if anything fails: kubectl logs <pod-name> -n laravel-app -c laravel -f. The -f flag follows the log in real-time.

Running Migrations

Your database is empty. Run migrations: kubectl exec -it <pod-name> -n laravel-app -c laravel -- php artisan migrate --force. The --force flag is necessary in production (Laravel requires it as a safety check).

Verify with php artisan migrate:status to see which migrations ran.

Day-to-Day Operations

Updating Your Application

Here’s where Kubernetes really shines. Make code changes locally, build a new Docker image, push it to ECR, then: kubectl rollout restart deployment/laravel -n laravel-app.

Kubernetes performs a rolling update: it creates new pods with the new image, waits for them to be ready, then terminates old pods. Your users never see downtime. Watch it happen: kubectl rollout status deployment/laravel -n laravel-app.

Scaling for Traffic

Black Friday coming up? Scale horizontally: kubectl scale deployment laravel -n laravel-app --replicas=5. Five pods now handle traffic, distributed by the load balancer. Traffic died down? Scale back to 1.

Better yet, set up Horizontal Pod Autoscaler to scale automatically based on CPU or memory usage. But that’s a topic for another day.
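
That said, if you want a taste now, the imperative form is a single command; it assumes the metrics-server add-on is installed, and the 70% CPU target is an arbitrary example:

kubectl autoscale deployment laravel -n laravel-app --cpu-percent=70 --min=1 --max=5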

Debugging Production Issues

Pod not responding? Check its status: kubectl describe pod <pod-name> -n laravel-app. Look at the Events section; it tells you what went wrong.

Application error? Check logs: kubectl logs <pod-name> -n laravel-app -c laravel. For errors that happened before a crash: kubectl logs <pod-name> -n laravel-app -c laravel --previous.

Database issue? Execute artisan tinker inside the pod: kubectl exec -it <pod-name> -n laravel-app -c laravel -- php artisan tinker. Then test the database connection directly.

Need to see environment variables? kubectl exec -it <pod-name> -n laravel-app -c laravel -- env | grep DB_.

Updating Environment Variables

Need to change a setting? Edit the ConfigMap: kubectl edit configmap laravel-config -n laravel-app. This opens the ConfigMap in your default editor. Make changes, save, then restart the deployment to apply them.

For secrets, it’s safer to delete and recreate: update your manifest file, delete the old secret, apply the manifest, and restart the deployment.

Security in Production

The Principle of Least Privilege

Every component has only the permissions it needs:

  • Kubernetes service accounts with specific RBAC rules
  • RDS security groups allowing only EKS cluster traffic
  • IAM roles scoped to specific actions

Network Security

Private Subnets: Application pods run in private subnets. They can't be accessed directly from the internet; all traffic flows through the load balancer.

Security Groups: Multiple layers of security groups control traffic flow. RDS accepts connections only from EKS. EKS nodes accept connections only from the load balancer and control plane.

VPC Flow Logs: Enabled for auditing all network traffic. If something suspicious happens, we have logs.

Secrets Management

My biggest regret was initially storing secrets in ConfigMaps (don’t do this!). Now I use Kubernetes Secrets as a baseline, with plans to migrate to AWS Secrets Manager for production.

Best practices I follow:

  • Never commit secrets to Git
  • Rotate database passwords quarterly
  • Use different credentials for each environment
  • Enable encryption at rest for RDS and EBS

Monitoring and Alerts

CloudWatch Alarms for:

  • High CPU/memory usage
  • Pod restart frequency
  • Database connection failures
  • SSL certificate expiration (just in case)

Log Aggregation: Integrated with AWS CloudWatch Logs. All pod logs are automatically shipped to CloudWatch, where I can search, filter, and create metrics.

Disaster Recovery (Hope for the Best, Plan for the Worst)

Database Backups

RDS automated backups run daily with 7-day retention. I also take manual snapshots before major deployments. The peace of mind is worth the few cents of snapshot storage.

Persistent Volume Snapshots

Weekly EBS snapshots of all persistent volumes, automated via AWS Backup. They will save you when you accidentally delete production uploads (yes, it happens).

Configuration Backups

The entire Kubernetes configuration is in Git. If the cluster explodes, I can recreate everything in an hour. Version control is disaster recovery.

Troubleshooting: The Complete Guide

Let me share the issues I encountered and how I solved them. These are real problems from production.

Issue 1: Pod Stuck in “Pending” Status

Symptoms: Pod shows “Pending” status for more than 2-3 minutes.

Investigation:

kubectl describe pod <pod-name> -n laravel-app

Common Causes:

  1. Insufficient cluster resources: Nodes don’t have enough CPU/memory for the pod’s resource requests.
    • Solution: Scale up the node group or use smaller resource requests
    • Check node resources: kubectl describe nodes
  2. PVC not binding: Persistent volume claim can’t find a volume.
    • Solution: Check if EBS CSI driver is running: kubectl get pods -n kube-system | grep ebs-csi
    • Verify storage class exists: kubectl get storageclass
  3. Image pull errors: Can’t pull image from ECR.
    • Solution: Verify ECR authentication is configured on nodes
    • Check if ECR repository exists and image is present

Issue 2: “CrashLoopBackOff” – The Most Common Error

Symptoms: Pod keeps restarting, shows “CrashLoopBackOff” status.

Investigation:

# Check current logs

kubectl logs <pod-name> -n laravel-app -c laravel

# Check init container logs

kubectl logs <pod-name> -n laravel-app -c laravel-setup

# Check previous container logs (if it crashed)

kubectl logs <pod-name> -n laravel-app -c laravel --previous

Common Causes:

  1. Database connection failed: Can’t reach RDS or wrong credentials.
    • Check security groups allow traffic from EKS to RDS
    • Verify DB_HOST, DB_PORT, DB_USERNAME, DB_PASSWORD
    • Test connection: kubectl exec -it <pod-name> -n laravel-app -c laravel -- php artisan tinker then DB::connection()->getPdo();
  2. Missing or invalid APP_KEY: Laravel requires a valid encryption key.
    • Generate new key: php artisan key:generate --show
    • Encode to base64: echo -n "base64:your-key" | base64
    • Update secret and restart deployment
  3. File permission issues: www-data user can't write to storage directories.
    • The init container should handle this, but check: kubectl exec -it <pod-name> -n laravel-app -c laravel -- ls -la /var/www/storage
    • Fix manually: kubectl exec -it <pod-name> -n laravel-app -c laravel -- chown -R www-data:www-data /var/www/storage
  4. PHP errors in code: Syntax errors or missing dependencies.
    • Check logs for PHP stack traces
    • Verify Composer dependencies are compatible
    • Test image locally before deploying

Issue 3: “ImagePullBackOff” Error

Symptoms: Pod can’t pull the Docker image from ECR.

Investigation:

kubectl describe pod <pod-name> -n laravel-app

Look for “Failed to pull image” messages.

Common Causes:

  1. ECR authentication expired: Nodes lost authentication to ECR.
    • Solution: Ensure nodes have ECR permissions in their IAM role
    • The AmazonEC2ContainerRegistryReadOnly policy should be attached to the node IAM role
  2. Wrong image URI: Typo in the image name or tag.
    • Verify image exists: aws ecr describe-images --repository-name your-app-name --region ap-south-1
    • Check the URI in deployment matches ECR exactly
  3. Image tag doesn’t exist: Pushed image with different tag.
    • List available tags: aws ecr list-images --repository-name your-app-name --region ap-south-1
    • Update deployment with correct tag or push missing tag

Issue 4: SSL Certificate Stuck in “Pending”

Symptoms: Certificate shows “Ready: False” for more than 10 minutes.

Investigation:

kubectl get certificate -n laravel-app

kubectl describe certificate laravel-tls-cert -n laravel-app

kubectl get certificaterequest -n laravel-app

kubectl describe certificaterequest <name> -n laravel-app

Common Causes:

  1. DNS not resolving: Domain doesn’t point to load balancer.
    • Test: nslookup your-domain.com
    • Wait for DNS propagation (can take up to 48 hours, usually 5-60 minutes)
    • Verify DNS record points to correct load balancer
  2. HTTP-01 challenge failing: Let’s Encrypt can’t reach your domain on port 80.
    • Verify ingress is accessible: curl -I http://your-domain.com
    • Check AWS security groups allow inbound traffic on port 80
    • Check cert-manager logs: kubectl logs -n cert-manager deployment/cert-manager
  3. Rate limiting: Hit Let’s Encrypt rate limits during testing.
    • Solution: Use staging environment for testing
    • Create a ClusterIssuer with staging server: server: https://acme-staging-v02.api.letsencrypt.org/directory
    • Delete existing certificate: kubectl delete certificate laravel-tls-cert -n laravel-app
    • Reapply manifest with staging issuer
  4. ClusterIssuer not created: cert-manager can’t find the issuer.
    • Check: kubectl get clusterissuer
    • If missing, apply the ClusterIssuer section of your manifest

Issue 5: Application Returns 502 Bad Gateway

Symptoms: Nginx returns 502 error when accessing the application.

Investigation:

# Check Nginx logs

kubectl logs <pod-name> -n laravel-app -c nginx

# Check if PHP-FPM is running

kubectl exec -it <pod-name> -n laravel-app -c laravel -- ps aux | grep php-fpm

# Test PHP-FPM connectivity

kubectl exec -it <pod-name> -n laravel-app -c nginx -- nc -zv 127.0.0.1 9000

Common Causes:

  1. PHP-FPM not running: Container started but PHP-FPM failed.
    • Check PHP-FPM logs: kubectl logs <pod-name> -n laravel-app -c laravel
    • Look for PHP-FPM configuration errors
    • Test config: kubectl exec -it <pod-name> -n laravel-app -c laravel -- php-fpm -t
  2. PHP-FPM listening on wrong interface: Nginx can’t reach PHP-FPM.
    • Verify PHP-FPM configuration includes: listen = 0.0.0.0:9000
    • This should be set in the Dockerfile
  3. PHP-FPM crashed: Out of memory or fatal error.
    • Check container resource limits
    • Increase memory limits in deployment
    • Check for memory leaks in application code
  4. Nginx misconfiguration: Wrong fastcgi_pass address.
    • Verify Nginx config has: fastcgi_pass php-fpm; or fastcgi_pass 127.0.0.1:9000;
    • Check upstream block is defined correctly

Issue 6: File Uploads Not Working

Symptoms: Users upload files, but they don’t appear or return errors.

Investigation:

# Check storage permissions

kubectl exec -it <pod-name> -n laravel-app -c laravel -- ls -la /var/www/storage/app/public

# Check if volume is mounted

kubectl exec -it <pod-name> -n laravel-app -c laravel -- df -h

# Check if symlink exists

kubectl exec -it <pod-name> -n laravel-app -c laravel -- ls -la /var/www/public/storage

Common Causes:

  1. Storage symlink missing: public/storage doesn’t point to storage/app/public.
    • Solution: The init container should create this, but verify
    • Create manually: kubectl exec -it <pod-name> -n laravel-app -c laravel -- php artisan storage:link
  2. Permission denied: www-data can’t write to storage.
    • Fix permissions: kubectl exec -it <pod-name> -n laravel-app -c laravel -- chown -R www-data:www-data /var/www/storage
    • Set mode: kubectl exec -it <pod-name> -n laravel-app -c laravel -- chmod -R 775 /var/www/storage
  3. Volume not mounted: PVC not bound or volume not mounted to pod.
    • Check PVC status: kubectl get pvc -n laravel-app
    • Should show “Bound” status
    • Describe PVC: kubectl describe pvc laravel-uploads-pvc -n laravel-app
  4. Nginx upload size limit: File larger than 50MB.
    • Increase in Nginx ConfigMap: client_max_body_size 100M;
    • Update ConfigMap and restart pods

Issue 7: High Memory Usage / OOMKilled

Symptoms: Pods keep restarting with “OOMKilled” status.

Investigation:

# Check resource usage

kubectl top pods -n laravel-app

# Check pod events

kubectl describe pod <pod-name> -n laravel-app

Common Causes:

  1. Memory leak in application: PHP processes consuming too much memory.
    • Solution: Profile application to find memory leaks
    • Common culprits: large arrays in memory, unclosed database connections, circular references
  2. Insufficient memory limits: Normal operation requires more than allocated.
    • Increase memory limits in the deployment, then apply the updated manifest and restart:

resources:
  limits:
    memory: "1Gi"  # Increased from 512Mi

  3. Too many concurrent requests: PHP-FPM overwhelmed.
    • Adjust PHP-FPM pm.max_children in the Dockerfile
    • Or scale horizontally: kubectl scale deployment laravel -n laravel-app --replicas=3

The Complete Resource Reference

Here’s everything you might need to bookmark:

Official Documentation

  • AWS EKS: https://docs.aws.amazon.com/eks/
  • Kubernetes: https://kubernetes.io/docs/
  • Laravel: https://laravel.com/docs
  • cert-manager: https://cert-manager.io/docs/
  • Nginx Ingress: https://kubernetes.github.io/ingress-nginx/
  • Docker: https://docs.docker.com/
  • Helm: https://helm.sh/docs/

Community Resources

  • Kubernetes Slack: kubernetes.slack.com
  • Laravel Discord: Community support for Laravel questions
  • Stack Overflow: For specific technical issues
  • GitHub Issues: For tool-specific problems

Final Thoughts: The Journey Continues

Deploying Laravel on AWS EKS was one of the most challenging and rewarding projects I’ve undertaken. Six months later, I’m still learning new things every week.

What surprised me most: The ecosystem’s maturity. Almost every problem I encountered had a well-documented solution. The community support is exceptional.

What I wish I knew earlier: Start small, iterate often. My first deployment was overly complex. I tried to implement everything at once. It was overwhelming and led to mistakes.

Better approach: Deploy the basics first (application + database), verify it works, then add features incrementally. Each addition is a learning opportunity.

The most valuable skill: Debugging. You will encounter issues. Learning to effectively read logs, describe resources, and understand Kubernetes events is more valuable than memorizing commands.

Is Kubernetes the right choice for everyone? No. But if you need scalability, high availability, and modern DevOps practices, it’s an excellent option. The learning curve is steep but manageable with patience and persistence.

If you found this helpful, I’d love to hear about your deployment journey. What worked? What didn’t? What did I miss?

Now go build something amazing. Your application deserves enterprise-grade infrastructure, and you have everything you need to make it happen.

Happy deploying! 

Author

Vivek Kumar

Published on: November 19, 2025