# Quantum Memory-Augmented Neural Networks: Bridging Classical and Quantum Machine Learning
QMANN combines classical neural networks with quantum memory operations to achieve enhanced learning capabilities. This open-source implementation provides practical quantum-inspired algorithms suitable for near-term quantum devices.
QMANN provides three distinct modes for quantum machine learning research and development:
**🔬 Theoretical Mode**
- Purpose: Theoretical analysis and algorithm development
- Capabilities: Unlimited qubits, perfect gates, infinite coherence
- Use Case: Research papers, theoretical bounds, algorithm design
- Cost: Free
**💻 Simulation Mode**
- Purpose: Algorithm validation and development
- Capabilities: Up to 20 qubits, noise modeling, quantum-inspired operations
- Use Case: Development, testing, education, reproducible research
- Cost: Free (requires computational resources)
**⚛️ Hardware Mode**
- Purpose: Experimental validation on real quantum hardware
- Capabilities: 4-12 qubits, real noise, actual quantum effects
- Use Case: Proof-of-concept, hardware benchmarking, quantum advantage validation
- Cost: Paid (IBM: ~$0.001/shot, IonQ: ~$0.01/shot)
```bash
# Clone the repository
git clone https://github.com/neuraparse/QMANN.git
cd QMANN

# Install dependencies
pip install -e .

# Quick test
make quicktest

# Choose your mode:
python examples/01_theoretical_mode.py  # 🔬 FREE - Theoretical analysis
python examples/02_simulation_mode.py   # 💻 FREE - Classical simulation
python examples/03_hardware_mode.py     # ⚛️ PAID - Real quantum hardware

# Or use Docker
docker build -t qmann .
docker run qmann python examples/02_simulation_mode.py
```

```bash
# 🔬 For research papers and theoretical analysis (FREE)
python examples/01_theoretical_mode.py

# 💻 For algorithm development and testing (FREE)
python examples/02_simulation_mode.py

# ⚛️ For real quantum hardware validation (PAID - estimate costs first!)
python scripts/estimate_hardware_costs.py --qubits 6 --shots 1000
python examples/03_hardware_mode.py
```
⚠️ IMPORTANT: QMANN has three distinct modes. Choose the right one for your needs!
| Mode | Purpose | Cost | Qubits | Use Case |
|---|---|---|---|---|
| 🔬 Theoretical | Algorithm design | FREE | Unlimited | Research papers, bounds |
| 💻 Simulation | Development & testing | FREE | Up to 20 | Validation, education |
| ⚛️ Hardware | Real experiments | PAID | 4-12 | Proof-of-concept, benchmarking |
- When to use: Writing papers, theoretical analysis, algorithm design
- Cost: FREE
- Resources: Unlimited (ideal quantum computer)
```python
from qmann.config import THEORETICAL_ANALYSIS, validate_experimental_setup
from qmann import QMANN

# Validate theoretical setup
validate_experimental_setup(THEORETICAL_ANALYSIS)

# Create theoretical model (unlimited resources)
model = QMANN(
    input_dim=100,         # Large input
    hidden_dim=512,        # Large hidden layer
    output_dim=10,
    memory_capacity=1024,  # Large memory
    max_qubits=50,         # Many qubits (theoretical)
)

print("Theoretical capacity:", model.get_quantum_info()['theoretical_capacity'])
```

- When to use: Algorithm development, testing, education, reproducible research
- Cost: FREE (requires computational resources)
- Resources: Up to 20 qubits, noise modeling
```python
import torch

from qmann.config import SIMULATION_VALIDATION, validate_experimental_setup
from qmann import QMANN, QMANNTrainer

# Validate simulation setup
validate_experimental_setup(SIMULATION_VALIDATION)

# Create simulation model (realistic limits)
model = QMANN(
    input_dim=10,
    hidden_dim=64,
    output_dim=3,
    memory_capacity=32,  # Limited by simulation
    max_qubits=8,        # Simulation limit
)

# Generate sample data
X = torch.randn(100, 15, 10)
y = torch.randint(0, 3, (100, 15))

# Train with simulation
trainer = QMANNTrainer(model)
trainer.train_epoch(X, y)

print("Simulation results:", model.get_quantum_info())
```

- When to use: Proof-of-concept, hardware benchmarking, quantum advantage validation
- Cost: PAID (IBM: ~$0.001/shot, IonQ: ~$0.01/shot)
- Resources: 4-12 qubits, real noise, actual quantum effects
```python
import torch

from qmann.hardware import ExperimentalQMANN, QuantumBackendManager
from qmann.config import HARDWARE_PROOF_OF_CONCEPT, validate_experimental_setup

# ⚠️ WARNING: This costs real money on quantum hardware!

# Setup quantum backends (requires API credentials)
backend_manager = QuantumBackendManager()

# Validate hardware setup
validate_experimental_setup(HARDWARE_PROOF_OF_CONCEPT)

# Create hardware model (NISQ constraints)
model = ExperimentalQMANN(
    input_dim=4,    # Small input for hardware
    hidden_dim=16,  # Small hidden layer
    output_dim=2,
    n_qubits=6,     # NISQ limit
    backend_manager=backend_manager,
)

# Small test data (hardware is expensive!)
X_test = torch.randn(5, 3, 4)  # Only 5 samples!

# Run on real quantum hardware
output, exp_info = model.experimental_forward(
    X_test,
    backend_name="ibm_brisbane",  # Real IBM quantum computer
    log_experiment=True,
)

print(f"Hardware success: {exp_info['quantum_success']}")
print(f"Backend used: {exp_info['backend_info']['name']}")
```

```
qmann/
├─ README.md
├─ .github/workflows/   # CI/CD pipelines
│  ├─ ci.yml            # Code → linters + unit-tests
│  ├─ arxiv-build.yml   # Paper → LaTeX + PDF/A check
│  └─ size-figs.yml     # Figure size validation
├─ src/qmann/           # Python package
├─ paper/               # arXiv submission
│  ├─ main.tex
│  ├─ meta.yaml         # Metadata for arXiv
│  ├─ sections/         # Paper sections
│  └─ figs/             # Figures and plots
├─ data/                # Datasets and download scripts
├─ circuits/            # QASM quantum circuits
├─ benchmarks/          # Performance benchmarks
├─ scripts/             # Utility scripts
└─ docker/              # Container definitions
```
QMANN introduces a quantum random access memory (QRAM)-backed external memory architecture that enhances neural network learning capabilities through quantum superposition and entanglement.
- Novel QRAM-based memory architecture for neural networks
- Theoretical analysis of quantum memory capacity advantages
- Experimental validation on classical ML benchmarks
- Open-source implementation with full reproducibility
- Memory Capacity: 2^n exponential scaling with n qubits
- Access Complexity: O(log n) logarithmic lookup time
- Gate Operations: Perfect quantum gates with no errors
- Entanglement: Full quantum entanglement advantages
- Scalability: Unlimited qubit count
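The capacity and access-complexity claims above can be illustrated with a short back-of-the-envelope calculation (a sketch for intuition; `qram_capacity` and `qram_access_depth` are illustrative helpers, not part of the QMANN API):

```python
import math

def qram_capacity(n_qubits: int) -> int:
    """Number of memory cells addressable in superposition with n address qubits."""
    return 2 ** n_qubits

def qram_access_depth(n_cells: int) -> int:
    """Idealized O(log n) address-decoding depth for an n-cell QRAM."""
    return math.ceil(math.log2(n_cells))

for n in (10, 20, 30):
    cells = qram_capacity(n)
    print(f"{n} qubits -> {cells:,} cells, access depth ~{qram_access_depth(cells)}")
```

The exponential gap is the point: doubling the address register squares the addressable memory while the lookup depth only doubles.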
- MNIST Sequential: 98.6% vs 98.2% classical baseline
- Parameter Efficiency: ~40% fewer parameters than classical models
- Memory Usage: 70% of theoretical quantum capacity achieved
- Scalability: Up to 20 qubits (2^20 ≈ 1M amplitudes)
- Performance: 2-3x speedup over classical memory-augmented networks
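The 20-qubit simulation ceiling follows directly from statevector memory growth; a quick sketch (assuming full-statevector simulation with complex128 amplitudes, 16 bytes each — an illustrative calculation, not a QMANN function):

```python
def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory needed to hold a full statevector of n qubits (complex128)."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (8, 20, 30):
    gib = statevector_bytes(n) / 2**30
    print(f"{n} qubits: {gib:.4f} GiB")
```

At 20 qubits the statevector fits in 16 MiB; at 30 qubits it already needs 16 GiB, which is why classical simulation stops scaling long before real hardware would.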
- Proof-of-Concept: Successfully demonstrated on IBM Brisbane (6 qubits)
- Noise Resilience: 60-80% fidelity on NISQ devices
- Circuit Depth: Limited to 50 gates due to decoherence
- Cost Efficiency: $0.10-$1.00 per experiment
- Quantum Advantage: Marginal advantage for small problems, promising for scaling
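The 50-gate depth limit and 60-80% fidelity figures are consistent with a simple multiplicative gate-error model; a sketch (the 99% per-gate fidelity is an illustrative assumption, not a measured device number):

```python
def circuit_fidelity(gate_fidelity: float, depth: int) -> float:
    """Crude estimate: total circuit fidelity decays multiplicatively with depth."""
    return gate_fidelity ** depth

# With ~99% per-gate fidelity, a 50-gate circuit retains roughly 60% fidelity,
# which is the lower end of the range reported above.
f = circuit_fidelity(0.99, 50)
print(f"{f:.3f}")
```

This is why NISQ experiments cap circuit depth: each additional gate multiplies in another factor of error.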
- Theoretical: Assumes perfect quantum computer (not yet available)
- Simulation: Classical simulation of quantum operations (no true quantum advantage)
- Hardware: Limited by current NISQ device capabilities and high error rates
```bash
# Core QMANN (simulation only)
pip install qmann

# Or install from source
git clone https://github.com/neuraparse/QMANN.git
cd QMANN
pip install -e .
```

```bash
# For real quantum hardware access
pip install qmann[hardware]

# For experimental features
pip install qmann[experimental]

# For everything
pip install qmann[all]
```

- Python: 3.9+
- Core: PyTorch 2.1+, Qiskit 1.0+, PennyLane 0.35+
- Hardware: IBM Quantum account, Google Quantum AI access (optional)
- GPU: CUDA 11.8+ (12.0+ recommended) for accelerated simulation (optional)
```bash
# Create virtual environment
python -m venv venv
source venv/bin/activate  # or `venv\Scripts\activate` on Windows

# Install dependencies
pip install -r requirements.txt
pip install -e .

# Run tests
pytest tests/
```

```bash
# Build image
docker build -t qmann .

# Run experiments
docker run -v $(pwd)/results:/app/results qmann python scripts/run_experiments.py

# Interactive development
docker run -it -v $(pwd):/app qmann bash
```

```bash
# Theoretical analysis and bounds
python benchmarks/run.py --mode theoretical --analysis capacity,complexity

# Generate theoretical plots
python benchmarks/theoretical_analysis.py --plot-scaling
```

```bash
# Run standard benchmarks (simulation)
python benchmarks/run.py --mode simulation --all

# Compare with classical baselines
python benchmarks/run.py --mode simulation --task mnist_sequential --models qmann,lstm,transformer

# Generate simulation plots
make plot-bench
```

```bash
# Step 1: Set up quantum hardware access (requires API credentials)
export IBMQ_TOKEN="your_ibm_token"
export GOOGLE_QUANTUM_PROJECT="your_project_id"

# Step 2: Validate setup with simulators first (FREE)
python examples/hardware_experiments.py --mode simulation --validate-setup

# Step 3: Run small proof-of-concept (COSTS ~$1-5)
python examples/hardware_experiments.py --mode hardware --budget 5.00 --qubits 4

# Step 4: Full hardware benchmark (COSTS ~$10-50)
python examples/hardware_experiments.py --mode hardware --budget 50.00 --qubits 8 --full-benchmark
```

```bash
# Estimate costs before running
python scripts/estimate_hardware_costs.py --qubits 6 --shots 1000 --backends ibm,ionq
# Output:
# IBM Brisbane (6 qubits, 1000 shots): $1.00
# IonQ Aria (6 qubits, 1000 shots): $10.00
# Total estimated cost: $11.00
```

| Provider | Device | Qubits | Technology | Status |
|---|---|---|---|---|
| IBM Quantum | Brisbane | 127 | Superconducting | ✅ Supported |
| IBM Quantum | Kyoto | 127 | Superconducting | ✅ Supported |
| Google Quantum AI | Sycamore | 70 | Superconducting | 🔄 In Progress |
| IonQ | Aria | 25 | Trapped Ion | 🔄 In Progress |
| AWS Braket | Various | Varies | Multiple | 📋 Planned |
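The cost estimator's arithmetic is simple per-shot pricing; a standalone sketch using the approximate rates quoted earlier (the rates and backend keys are illustrative, not an official price list or the `estimate_hardware_costs.py` implementation):

```python
# Approximate per-shot rates quoted in this README (illustrative only)
RATES_USD_PER_SHOT = {
    "ibm_brisbane": 0.001,
    "ionq_aria": 0.01,
}

def estimate_cost(backend: str, shots: int) -> float:
    """Estimated cost in USD for one circuit executed `shots` times."""
    return RATES_USD_PER_SHOT[backend] * shots

# Reproduces the $11.00 total from the example output above
total = sum(estimate_cost(b, 1000) for b in RATES_USD_PER_SHOT)
print(f"Total estimated cost: ${total:.2f}")
```

Shots dominate the bill, so the cheapest optimization is usually reducing shot count before reducing qubit count.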
- Minimum: 4-6 qubits for basic experiments
- Recommended: 8-12 qubits for meaningful results
- Future: 50+ qubits for quantum advantage demonstrations
```bash
cd paper/
make pdf           # Compile LaTeX
make check-pdfa    # Verify PDF/A compliance
make submit-arxiv  # Prepare arXiv submission
```

All results can be reproduced using:

```bash
scripts/reproduce.sh  # Full reproduction pipeline
```

This script:
- Downloads required datasets
- Runs all experiments
- Generates figures and tables
- Validates results against published values
```python
from benchmarks.assetops_adapter import AssetOpsBenchmarkAdapter
from qmann import QMANN, QMANNTrainer

# Initialize AssetOps adapter
adapter = AssetOpsBenchmarkAdapter()

# Available industry scenarios
tasks = adapter.list_available_tasks()
# ['predictive_maintenance', 'energy_optimization', 'supply_chain_anomaly',
#  'quality_control', 'patient_monitoring']

# Generate industry-specific dataset
X, y = adapter.generate_synthetic_dataset('predictive_maintenance', n_samples=1000)

# Train QMANN on industrial data
model = QMANN(input_dim=X.shape[-1], hidden_dim=64, output_dim=3)
trainer = QMANNTrainer(model)
trainer.train_epoch(X, y)
```

```python
import torch

from qmann import QMANN
from qmann.telemetry import AgentOpsIntegration

# Initialize telemetry (requires AgentOps API key)
telemetry = AgentOpsIntegration(api_key="your_agentops_key")

# Automatic metrics collection during QMANN operations
model = QMANN(input_dim=10, hidden_dim=64, output_dim=3)
X = torch.randn(100, 15, 10)   # example input batch
output, memory_info = model(X)  # Automatically logged

# View real-time statistics
stats = telemetry.get_real_time_stats()
print(f"Memory hit ratio: {stats['memory_hit_ratio']:.2%}")
print(f"Total cost: ${stats['total_cost_usd']:.2f}")
```

```python
from qmann.utils.watermark import prepare_dataset_with_watermark, verify_dataset_watermark

# Protect your dataset with quantum watermarking
watermarked_data, metadata = prepare_dataset_with_watermark(
    dataset=X_train,
    watermark_id="my_proprietary_dataset",
)

# Train on watermarked data (preserves utility)
model.train(watermarked_data, y_train)

# Later: verify watermark integrity
verification = verify_dataset_watermark(suspicious_data, metadata)
print(f"Watermark detected: {verification['watermark_detected']}")
print(f"Confidence: {verification['confidence']:.2%}")
```

- Realistic error modeling for NISQ devices
- Shot cost optimization and budget management
- Automatic fallback to classical simulation
- Surface code error correction integration
- Prometheus metrics for monitoring
- AgentOps integration for observability
- Real-time performance tracking
- Cost and resource optimization
- AssetOps benchmark suite integration
- Manufacturing, energy, healthcare scenarios
- Automated performance evaluation
- Quantum advantage validation
- Quantum-inspired watermarking
- Dataset integrity verification
- Intellectual property protection
- Regulatory compliance support
- Automated testing and validation
- Multi-platform compatibility
- Performance benchmarking
- Security scanning
If you use this work, please cite:

```bibtex
@article{eker2025qmann,
  title={Quantum Memory-Augmented Neural Networks: Bridging Quantum Computing and Machine Learning},
  author={Eker, Bayram and others},
  journal={arXiv preprint arXiv:2025.XXXXX},
  year={2025},
  note={Available at: https://github.com/neuraparse/QMANN}
}
```

This project uses Dual Licensing with patent protection:
**Academic/Research License:**
- Non-commercial research and education only
- No patent filing rights
- Must cite Neura Parse in publications
- See LICENSE for full terms

**Commercial License:**
- Required for any commercial use
- Includes patent licensing
- Contact: info@neuraparse.com

**Paper and Data:**
- Paper: CC BY 4.0 (with patent reservations)
- Data: CC0 1.0 (where applicable)

**Patents:**
- Core technologies are patent-protected
- See PATENTS.md for details
- Patent licensing: info@neuraparse.com
See CONTRIBUTING.md for guidelines.
- Organization: Neura Parse
- Email: info@neuraparse.com
- GitHub: @neuraparse
- Lead Researcher: Bayram Eker - ORCID
- Issues: GitHub Issues
- Discussions: GitHub Discussions
This work was developed by the Neura Parse research team. We thank the quantum computing community for their valuable feedback and the open-source contributors who made this project possible.
Special thanks to:
- IBM Quantum Network for quantum hardware access
- Google Quantum AI for research collaboration
- The Qiskit and PennyLane development teams
- arXiv.org for open science publishing
Status: 🚧 Under Development | Version: 1.0.0-alpha | Last Updated: July 2025