Loquito: End-to-End Autonomous Driving Model

Loquito is an interpretable, end-to-end autonomous driving model that predicts future waypoints and task actions (steering and stopping) from sequences of multi-camera RGB images. It combines a ResNet-based visual encoder, attention-based feature pooling, and temporal GRUs to learn driving behavior from data.
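The attention-based feature pooling mentioned above collapses a grid of visual features into a single vector using learned per-location scores. A generic, dependency-free sketch of that idea (not the repository's implementation; names are illustrative):

```python
import math

def attention_pool(features, scores):
    """Softmax-weighted spatial pooling: collapse a list of per-location
    feature vectors into one pooled vector using per-location scores.
    In the real model, the scores come from a learned attention head."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(features[0])
    # Weighted sum over locations, dimension by dimension.
    return [sum(w * f[d] for w, f in zip(weights, features)) for d in range(dim)]
```

With equal scores this reduces to average pooling; sharper score differences concentrate the pooled vector on fewer locations, which is also what makes the attention maps interpretable.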

Important

This repository is open-sourced and maintained by the Institute for Automotive Engineering (ika) at RWTH Aachen University.
End-to-End Automated Driving is one of many research topics within our Vehicle Intelligence & Automated Driving domain.
If you would like to learn more about how we can support your advanced driver assistance and automated driving efforts, feel free to reach out to us!
πŸ“§ opensource@ika.rwth-aachen.de

🎬 Teaser video: loquito-teaser.mp4

πŸ”§ Features

  • Attention-based visual encoder using ResNet backbones with learnable spatial pooling.
  • Multi-camera input support: Processes synchronized image sequences from 4 viewpoints.
  • Waypoint prediction: Outputs relative spatial displacements for future trajectory planning.
  • Task prediction: Predicts binary stop signals and continuous steering angles.
  • MLflow integration: Tracks all experiments and metrics.
  • TorchScript export: Easily export models for deployment.
  • Dockerized: Fully containerized environment for training and MLflow server.

A detailed description of Loquito’s architecture, attention-based explainability, and embedding space is available in the overview document:

➑️ Loquito Overview

πŸ—‚οΈ Project Structure

loquito/
├── configs/                # YAML configs for training and model variants
├── docker/                 # Dockerfiles and docker compose setup
├── eval/                   # Evaluation tools
├── loquito/                # Core library
│   ├── data/               # Dataset indexing & downloading
│   ├── lib/                # Utilities and coordinate transforms
│   ├── models/             # Loquito model definition
│   └── training/           # Trainer, dataloader, and loss functions
└── scripts/                # Training and export scripts

πŸš€ Quick Start

1. Clone the Repository

git clone --recurse-submodules https://github.com/ika-rwth-aachen/loquito.git
cd loquito

2. Configure Environment

Loquito requires the LOQUITO_DATA_DIR environment variable to be set. This directory will host the dataset, models, and MLflow artifacts.

The path below is just an example. Please adjust it to the desired storage location on your system.

export LOQUITO_DATA_DIR=${HOME}/workspace/data

3. Setup Dataset

You can use your own dataset or download the LMDrive dataset, which was used for training the provided pretrained models. To download the LMDrive dataset, run the following commands:

python3 loquito/data/dataset_downloader.py --num_workers 8
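The `--num_workers` flag parallelizes the download. A generic sketch of the worker-pool pattern behind such a flag (not the downloader's actual implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def download_all(items, fetch, num_workers=8):
    """Apply `fetch` to every item using a pool of num_workers threads.

    Returns results in input order; threads suit I/O-bound work like
    HTTP downloads, where the GIL is released while waiting on sockets.
    """
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        return list(pool.map(fetch, items))
```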

4. Start MLflow Server

docker compose -f docker/docker-compose.yaml up -d loquito-mlflow

Access MLflow UI at http://localhost:5000

ℹ️ Configure MLflow Connection (Optional)

By default, the training config uses mlflow_tracking_uri: "http://loquito-mlflow:5000". This works out-of-the-box because Docker's internal DNS resolves loquito-mlflow to the correct container IP within the shared network.

If you encounter networking issues, want to connect via the host IP, or use a remote MLflow server, you can change the URI in configs/train.yaml:

# configs/train.yaml
# Default (Docker DNS):
mlflow_tracking_uri: "http://loquito-mlflow:5000"
# Alternative (host IP or remote server):
# mlflow_tracking_uri: "http://192.168.1.50:5000"
# mlflow_tracking_uri: "http://my-remote-mlflow.com:5000"

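One common pattern is to let an environment variable override the config value, so the YAML does not need editing per machine. A sketch of that selection logic; `MLFLOW_TRACKING_URI` is MLflow's standard override variable, but wiring it up this way is an assumption, not something the repository does:

```python
import os

def tracking_uri(config_uri="http://loquito-mlflow:5000", env=os.environ):
    """Pick the MLflow tracking URI: an environment override wins,
    otherwise fall back to the value from configs/train.yaml."""
    return env.get("MLFLOW_TRACKING_URI", config_uri)
```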
5. Train the Model

Start Training:

Start the docker container

docker compose -f docker/docker-compose.yaml run --name loquito_train_1 loquito-train

Then within the docker container, first verify the MLflow connection:

python scripts/tests/test_mlflow_connection.py

If the connection is successful, choose a model config (e.g., loquito_v1s.yaml) and start training:

python scripts/train.py --config configs/models/loquito_v1s.yaml --device 0

Training logs and saved checkpoints will appear under data/models/<model_name>/.

6. Export to TorchScript

python scripts/export.py --model-path results/loquito_v1s/<timestamp>/model_epoch_5.pth
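The checkpoint path encodes the model name, run timestamp, and epoch. A tiny helper that parses this convention, useful when scripting exports over many checkpoints (the helper itself is illustrative, not part of the repository):

```python
import re

def parse_checkpoint_path(path):
    """Split a path of the form results/<model>/<timestamp>/model_epoch_<N>.pth
    into its (model, timestamp, epoch) parts."""
    m = re.fullmatch(r"results/([^/]+)/([^/]+)/model_epoch_(\d+)\.pth", path)
    if m is None:
        raise ValueError(f"unexpected checkpoint path: {path}")
    model, timestamp, epoch = m.groups()
    return model, timestamp, int(epoch)
```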

🧠 Model Variants

| Config File | Backbone | Embed Dim | Image Size |
|---|---|---|---|
| loquito_v1s.yaml | ResNet18 | 256 | 288×384 |
| loquito_v1s_hd.yaml | ResNet18 | 256 | 576×768 |
| loquito_v1m.yaml | ResNet34 | 256 | 288×384 |
| loquito_v1m_hd.yaml | ResNet34 | 256 | 576×768 |
| loquito_v1l.yaml | ResNet50 | 1024 | 288×384 |
| loquito_v1l_hd.yaml | ResNet50 | 1024 | 576×768 |

Each variant balances trade-offs between model size, resolution, and performance.
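When selecting a config programmatically, the variant table translates directly into a lookup. The dict below only restates the table above; the structure and name are illustrative:

```python
# (backbone, embedding dim, (H, W) image size) per config file.
VARIANTS = {
    "loquito_v1s.yaml":    ("ResNet18", 256, (288, 384)),
    "loquito_v1s_hd.yaml": ("ResNet18", 256, (576, 768)),
    "loquito_v1m.yaml":    ("ResNet34", 256, (288, 384)),
    "loquito_v1m_hd.yaml": ("ResNet34", 256, (576, 768)),
    "loquito_v1l.yaml":    ("ResNet50", 1024, (288, 384)),
    "loquito_v1l_hd.yaml": ("ResNet50", 1024, (576, 768)),
}
```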

πŸ“Š Loss Components

| Name | Description |
|---|---|
| waypoint | L2 loss on predicted vs. ground-truth waypoints |
| perpendicular | Distance to the expert trajectory |
| smoothness | Penalizes sudden changes between predicted points |
| stop | Binary cross-entropy for the stop signal |
| steer | L1 loss for the steering angle |
| embedding | Cosine similarity to future embeddings |

Weights are configurable in the YAML config.
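Combining weighted components typically amounts to a weighted sum. A sketch of that aggregation; the default of 0 for an unweighted (i.e. disabled) component is an assumption of this sketch, not documented behavior:

```python
def total_loss(components, weights):
    """Weighted sum of named loss components, with weights from the YAML
    config. Components without a configured weight contribute nothing."""
    return sum(weights.get(name, 0.0) * value for name, value in components.items())
```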

πŸ§ͺ Logging with MLflow

  • Automatically logs hyperparameters and all loss metrics.
  • Models saved after every epoch.
  • Set tracking URI in configs/train.yaml.

🐳 Docker Support

Training and the MLflow server are fully containerized; the Dockerfiles and compose setup live in docker/ (see the Quick Start above).

πŸ§ͺ Pretrained Model & Benchmark Performance

A pretrained Loquito model is available, trained with the LMDrive dataset and evaluated on the Longest6 benchmark.

πŸ“₯ Model Weights

Download the trained model checkpoints:

➑️ Loquito Pretrained Weights

These weights can be used for further evaluation, inference, or fine-tuning.

The model used for the evaluation and driving videos is loquito_v1m_hd (Epoch 3).

πŸŽ₯ Driving Videos

Watch Loquito drive autonomously in the Longest6 benchmark:

➑️ YouTube Playlist: Longest6 Driving Results

🎬 Each video shows the model completing a different route from the 36 test routes in the benchmark.

πŸ“Š Longest6 Benchmark Results

Closed-loop evaluation on Longest6. All infractions are reported per kilometer driven.

| Metric | Value |
|---|---|
| Driving Score (DS) | 29.78 |
| Route Completion (RC) | 56.40 % |
| Infraction Score (IS) | 61.40 % |
| Collisions with pedestrians | 0.063 / km |
| Collisions with vehicles | 3.176 / km |
| Collisions with layout | 0.346 / km |
| Red light infractions | 0.126 / km |
| Stop sign infractions | 0.220 / km |
| Off-road infractions | 0.126 / km |
| Route deviations | 0.000 / km |
| Route timeouts | 0.063 / km |
| Agent blocked | 0.692 / km |
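On CARLA-style benchmarks, the Infraction Score is typically the product of per-infraction penalty coefficients raised to the number of occurrences, and the Driving Score combines it with Route Completion per route. A sketch of that formula; the coefficient values in the test are illustrative assumptions, not taken from this repository or the benchmark:

```python
def infraction_score(counts, penalties):
    """CARLA-leaderboard-style infraction score: the product of each
    infraction type's penalty coefficient raised to its occurrence count.
    A perfect run (no infractions) scores 1.0."""
    score = 1.0
    for name, count in counts.items():
        score *= penalties[name] ** count
    return score
```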

πŸ“¦ Evaluation Scripts (eval)

The eval/ folder contains a collection of scripts and notebooks for evaluating, analyzing, and visualizing Loquito model results. Note: Most of these scripts are provided for completeness and may not represent production-quality code.

These scripts support both closed-loop (simulated driving) and open-loop (offline dataset) evaluation. Use them to generate plots, videos, and interactive visualizations for reporting and analysis.

Folder Structure

eval/
├── closed_loop
│   ├── carla_garage/                        # Submodule: CARLA simulation & leaderboard tools (with Loquito integration)
│   │   ├── leaderboard/scripts/
│   │   │   └── local_evaluation_loquito.sh  # Script to run the evaluation
│   │   └── team_code_loquito/
│   │       ├── loquito_agent.py             # Custom agent logic wrapping the model
│   │       └── navigator.py                 # Route navigation and target extraction
│   └── visualization
│       ├── combine_images.py                # Combine images for qualitative review
│       ├── create_video.py                  # Create videos from image sequences
│       └── create_viz.py                    # Visualize outputs: attention maps, trajectories, embeddings
└── open_loop
    ├── explainability
    │   ├── eval_xai.py                      # Explainability analysis (PCA, feature importance, etc.)
    │   └── imagesearch.ipynb                # Notebook for image-based search and explainability
    ├── statistics
    │   ├── analyse_model_stats.py           # Statistical analysis and plotting
    │   ├── combine_csvs.py                  # Aggregate CSV result files
    │   └── create_statistics.py             # Compute evaluation metrics from predictions
    └── visualizer
        └── loquito_visualizing.ipynb        # Interactive visualization of model outputs

Closed-Loop Testing with Carla Garage

The closed_loop/carla_garage submodule integrates the carla_garage repository for closed-loop testing in the CARLA simulator. This version includes:

  • Loquito Agent (loquito_agent.py): Custom agent for leaderboard evaluation.
  • Navigator (navigator.py): Access to the complete route for advanced planning.
  • EgoLocation Pseudosensor: Provides precise ego-vehicle localization.
  • Enhanced Logging: Logs BEV and game camera views, infractions, and driving scores.

To run a local evaluation, use the provided script local_evaluation_loquito.sh. This setup enables realistic, interactive evaluation of Loquito in simulated urban driving scenarios.

Acknowledgements

This repository was developed as part of Lars Ippen's master's thesis, in cooperation between the Institute for Automotive Engineering (ika) and the Chair of Integrated Digital Systems and Circuit Design, both at RWTH Aachen University.

The work is accomplished within the project autotech.agil (FKZ 01IS22088A). We acknowledge the financial support for the project by the Federal Ministry of Education and Research of Germany (BMBF) πŸ‡©πŸ‡ͺ.
