This page explains the main public data models. You do not need to memorize every field, but knowing the shape of the model helps when you build or inspect specs from Python.
- Saved File Shape
- NetworkSpec
- TensorSpec and IndexSpec
- EdgeSpec
- HyperedgeSpec
- GroupSpec and CanvasNoteSpec
- Contraction Plans
- Linear Periodic Models
- Grid Periodic Models
- Tree Periodic Models
- Result Models and Enums
- Practical Advice
Saved JSON files use a schema wrapper:
```json
{
  "schema_version": 2,
  "network": {
    "...": "..."
  }
}
```

The wrapper lets the package reject unsupported file versions clearly instead
of guessing how to load them. New saves use schema version 2, and schema
version 1 remains loadable for older saved designs.
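The wrapper can be inspected with nothing but the standard library before handing the payload on. A minimal sketch using plain `json` (the supported-version set is an assumption for illustration, not the package's actual check):

```python
import json

raw = '{"schema_version": 2, "network": {"id": "network_a", "name": "demo"}}'
payload = json.loads(raw)

# Reject unsupported file versions up front, mirroring the loader's behavior.
# The exact supported set here is illustrative.
SUPPORTED_VERSIONS = {1, 2}
version = payload.get("schema_version")
if version not in SUPPORTED_VERSIONS:
    raise ValueError(f"unsupported schema_version: {version!r}")

network = payload["network"]
print(network["name"])  # demo
```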
The in-memory object is a NetworkSpec. Use
tensor_network_editor.io.serialize_spec(...) and
tensor_network_editor.io.deserialize_spec(...) when you need to move between
the object and the schema-wrapped JSON payload. Use load_spec(...) and
save_spec(...) for files.
NetworkSpec is the root object for one abstract tensor-network design.
It stores:
- id
- name
- tensors
- groups
- edges
- hyperedges
- notes
- contraction_plan
- linear_periodic_chain
- grid_periodic_grid
- tree_periodic_tree
- metadata
metadata is the place for lightweight annotations. The stable tag convention
is metadata.tags, which should be a small list of strings on network, tensor,
index, edge, group, or note entities.
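Because metadata is a plain mapping, tags can be read and written with ordinary dict operations. A minimal sketch of the metadata.tags convention on a tensor-shaped dict (the entity payload is illustrative, not the full serialized form):

```python
# Illustrative entity payload; only the metadata mapping matters here.
tensor_entry = {"id": "tensor_a", "metadata": {}}

# Write a small list of string tags under the stable key.
tensor_entry["metadata"]["tags"] = ["mps", "boundary"]

# Read tags back defensively: a missing key means "no tags".
tags = tensor_entry["metadata"].get("tags", [])
print(tags)  # ['mps', 'boundary']
```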
For tensors and indices, the editor also understands a small guided convention
inside the existing metadata mapping. This is a documented convention, not a
new saved-file schema:
- tensor keys: role, state, provenance, symmetry
- index keys: leg_kind, symmetry, observable
These values stay free-form text. You can still keep any extra keys you want in
the same metadata object.
lint_spec(...) understands these guided keys more deeply than generic custom
metadata. For example, it can warn about open indices marked as leg_kind="bond"
or conflicting symmetry annotations across connected legs while still keeping
those checks as soft lint findings rather than hard validation errors.
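The leg_kind check can be pictured as a soft pass over open indices. A hedged sketch of the idea, working on illustrative dict-shaped data (this is not the lint_spec(...) implementation):

```python
# Illustrative connectivity: ids of indices used by any edge or hyperedge.
connected_index_ids = {"tensor_a_x"}

# Illustrative indices carrying guided metadata.
indices = [
    {"id": "tensor_a_x", "metadata": {"leg_kind": "bond"}},
    {"id": "tensor_a_i", "metadata": {"leg_kind": "bond"}},  # open but marked bond
]

findings = []
for index in indices:
    leg_kind = index["metadata"].get("leg_kind")
    if leg_kind == "bond" and index["id"] not in connected_index_ids:
        # A soft lint finding, not a hard validation error.
        findings.append(f"open index {index['id']} is marked leg_kind='bond'")

print(findings)
```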
Useful helper methods:
- tensor_map(): map tensor ids to tensors
- index_map(): map index ids to their owning tensor and index
- connected_index_ids(): ids of indices used by edges or hyperedges
- open_indices(): tensor/index pairs that are not connected by either kind of connection
Example:
```python
from tensor_network_editor import NetworkSpec

spec = NetworkSpec(id="network_empty", name="empty example")
print(spec.open_indices())
```

TensorSpec represents one tensor node on the canvas. IndexSpec represents
one named index on that tensor.
```python
from tensor_network_editor import CanvasPosition, IndexSpec, TensorSpec

tensor = TensorSpec(
    id="tensor_a",
    name="A",
    position=CanvasPosition(x=120.0, y=160.0),
    indices=[
        IndexSpec(id="tensor_a_i", name="i", dimension=2),
        IndexSpec(id="tensor_a_x", name="x", dimension=3),
    ],
)
print(tensor.shape)
```

tensor.shape is derived from the dimensions of its indices. In the example
above, it is (2, 3).
Each tensor also stores canvas position, visual size, optional metadata,
optional tensor_data, and optional periodic-mode roles used by the
specialized editors.
tensor_data is the portable place for tensor initializers. It is
not stored in metadata, because it directly affects generated backend code.
Example:
```python
from tensor_network_editor import TensorDataMode, TensorDataSpec

tensor.tensor_data = TensorDataSpec(
    mode=TensorDataMode.FILL,
    fill_value=0.5,
)
```

Supported tensor-data modes are:

- None: no explicit payload, so generated backend code initializes zeros
- TensorDataMode.ZEROS: explicit zero initializer, useful when a dtype is set
- TensorDataMode.ONES: initialize the whole tensor with ones
- TensorDataMode.FILL: repeat one scalar value across the tensor shape
- TensorDataMode.IDENTITY: create a square rank-2 identity/delta tensor
- TensorDataMode.COPY: create a generalized diagonal copy tensor where every axis has the same dimension
- TensorDataMode.RANDOM: create deterministic seeded normal or uniform data
- TensorDataMode.LITERAL: store nested Python lists of finite real or complex values that exactly match tensor.shape
- TensorDataMode.EXTERNAL: load tensor values from a .npy, .npz, or .pt file in generated code, with a runtime shape check
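For LITERAL payloads, the nested lists must match tensor.shape exactly. A minimal sketch of that shape check over plain lists (the helper name is made up for illustration, and it assumes uniform, non-empty nesting):

```python
def nested_shape(values):
    """Infer the shape of uniformly nested, non-empty lists."""
    shape = []
    node = values
    while isinstance(node, list):
        shape.append(len(node))
        node = node[0]
    return tuple(shape)

# A (2, 3) literal, matching e.g. indices i (dim 2) and x (dim 3).
literal = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
expected_shape = (2, 3)
assert nested_shape(literal) == expected_shape
print(nested_shape(literal))  # (2, 3)
```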
TensorDataSpec.dtype can be float32, float64, complex64, or
complex128. Complex scalars are JSON objects such as
{"real": 1.0, "imag": -0.5}.
For external data, file_path is required. .npz files also require
array_key; .pt files may either store a tensor directly or use array_key
to select a tensor from a saved mapping. The optional dtype field asks
generated code to convert the loaded array/tensor.
Serialized tensor-data payloads are small JSON objects:
```json
{"mode": "ones"}
{"mode": "fill", "fill_value": {"real": 1.0, "imag": -0.5}, "dtype": "complex128"}
{"mode": "identity", "dtype": "float64"}
{"mode": "copy"}
{"mode": "random", "seed": 123, "distribution": "uniform", "dtype": "float32"}
{"mode": "literal", "values": [[1.0, 0.0], [0.0, 1.0]]}
{"mode": "external", "file_path": "data/tensor_a.npz", "array_key": "a", "dtype": "float64"}
```

When generated code is written from the CLI, relative external file_path
values are resolved relative to the input JSON file. In the Python API, pass
external_data_base_path=... to generate_code(...) when you want the same
anchoring behavior. Without that argument, the path is emitted exactly as
stored.
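The anchoring rule itself is plain path arithmetic. A sketch with pathlib of how a relative file_path would resolve against the input JSON file's directory (the paths here are illustrative):

```python
from pathlib import Path

spec_file = Path("designs/network.json")  # illustrative input JSON file
file_path = "data/tensor_a.npz"           # as stored in the tensor-data payload

# Relative paths resolve against the directory of the input JSON file;
# absolute paths would be kept as-is.
anchored = spec_file.parent / file_path
print(anchored.as_posix())  # designs/data/tensor_a.npz
```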
Generated hyperedge copy tensors are an implementation detail of exports.
Supported generated Python reloads use structured comments around those copy
tensors to reconstruct the original HyperedgeSpec.
Contraction analysis and benchmark mode use the same idea internally for
normal-mode specs with hyperedges: generated copy tensors appear as synthetic
operands in analysis results, but they are not saved back into the visual
model.
In the editor sidebar, tensor and index properties expose:
- Tags, for metadata.tags
- Suggested annotations, for the guided keys above
- Custom metadata (JSON), for the remaining free-form keys
The guided fields and the JSON editor both write into metadata, but the JSON
editor hides the guided keys so you do not edit the same value in two places.
Leaving a guided field empty removes that key from metadata.
EdgeSpec connects two tensor indices. Each side is an EdgeEndpointRef.
```python
from tensor_network_editor import EdgeEndpointRef, EdgeSpec

edge = EdgeSpec(
    id="edge_x",
    name="bond_x",
    left=EdgeEndpointRef(tensor_id="tensor_a", index_id="tensor_a_x"),
    right=EdgeEndpointRef(tensor_id="tensor_b", index_id="tensor_b_x"),
)
```

For a valid edge:
- both tensors must exist
- both indices must exist
- each index must belong to the referenced tensor
- connected dimensions must match
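These rules can be checked directly on dict-shaped data. A hedged sketch of the existence and dimension-match rules (the data layout is illustrative; validate_spec(...) performs the real checks):

```python
# Illustrative index dimensions keyed by (tensor_id, index_id).
dimensions = {
    ("tensor_a", "tensor_a_x"): 3,
    ("tensor_b", "tensor_b_x"): 3,
}

left = ("tensor_a", "tensor_a_x")
right = ("tensor_b", "tensor_b_x")

# Both endpoints must exist, and their dimensions must match.
assert left in dimensions and right in dimensions, "endpoint does not exist"
assert dimensions[left] == dimensions[right], "connected dimensions must match"
print("edge ok, bond dimension:", dimensions[left])
```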
HyperedgeSpec connects three or more indices that all share the same
dimension. Its endpoints are also stored as EdgeEndpointRef values.
```python
from tensor_network_editor import CanvasPosition, EdgeEndpointRef, HyperedgeSpec

hyperedge = HyperedgeSpec(
    id="hyperedge_shared",
    name="shared_bond",
    endpoints=[
        EdgeEndpointRef(tensor_id="tensor_a", index_id="tensor_a_x"),
        EdgeEndpointRef(tensor_id="tensor_b", index_id="tensor_b_x"),
        EdgeEndpointRef(tensor_id="tensor_c", index_id="tensor_c_x"),
    ],
    hub_offset=CanvasPosition(x=24.0, y=-12.0),
)
```

hub_offset stores the editor-side visual displacement of the hyperedge hub
relative to the automatic center computed from the endpoints.
CanvasPosition(x=0.0, y=0.0) means "keep the hub centered".
Serialized shape:
```json
{
  "id": "hyperedge_shared",
  "name": "shared_bond",
  "endpoints": [
    {"tensor_id": "tensor_a", "index_id": "tensor_a_x"},
    {"tensor_id": "tensor_b", "index_id": "tensor_b_x"},
    {"tensor_id": "tensor_c", "index_id": "tensor_c_x"}
  ],
  "hub_offset": {"x": 24.0, "y": -12.0},
  "metadata": {}
}
```

Older payloads that do not include hub_offset still load and default to
{"x": 0.0, "y": 0.0} for backward compatibility.
For a valid hyperedge:
- it must have at least three endpoints
- every referenced tensor and index must exist
- each index must belong to the referenced tensor
- endpoint indices must be unique inside the hyperedge
- all endpoint dimensions must match
- an index cannot be reused by another edge or hyperedge
In the editor, hyperedges are first-class saved objects in normal mode. The
visible hub is still not a real tensor node; only the relative hub_offset is
saved alongside the endpoints. Generated backend code lowers hyperedges to
autogenerated copy tensors plus binary edges, because the target backends still
consume pairwise connectivity.
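The lowering can be pictured with NumPy: an n-endpoint hyperedge of shared dimension d becomes a rank-n generalized delta whose entries are 1 exactly where all indices agree, wired to the original endpoints by binary edges. A sketch under that reading (not the generated code itself):

```python
import numpy as np

def copy_tensor(dimension, rank):
    """Generalized delta: 1 where all indices are equal, 0 elsewhere."""
    delta = np.zeros((dimension,) * rank)
    for k in range(dimension):
        delta[(k,) * rank] = 1.0
    return delta

# A three-endpoint hyperedge of dimension 3 lowers to a rank-3 copy tensor
# plus one binary edge per original endpoint.
delta = copy_tensor(3, 3)
print(delta[0, 0, 0], delta[0, 1, 0])  # 1.0 0.0
```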
Planner and benchmark analysis also lower hyperedges internally and report the
generated operands through ContractionAnalysisResult.synthetic_operands.
GroupSpec is a visual grouping of tensor ids:
```python
from tensor_network_editor import GroupSpec

group = GroupSpec(
    id="group_left_block",
    name="Left block",
    tensor_ids=["tensor_a", "tensor_b"],
)
```

CanvasNoteSpec stores free-form text on the canvas:
```python
from tensor_network_editor import CanvasNoteSpec, CanvasPosition

note = CanvasNoteSpec(
    id="note_boundary",
    text="Open indices are physical legs.",
    position=CanvasPosition(x=80.0, y=60.0),
)
```

Groups and notes do not change the mathematical connectivity. They are there to make larger diagrams easier to understand.
Manual contraction plans are stored with:
- ContractionPlanSpec
- ContractionStepSpec
- ContractionOperandLayoutSpec
- ContractionViewSnapshotSpec
Basic example:
```python
from tensor_network_editor import ContractionPlanSpec, ContractionStepSpec

plan = ContractionPlanSpec(
    id="plan_manual",
    name="Manual path",
    steps=[
        ContractionStepSpec(
            id="step_contract_ab",
            left_operand_id="tensor_a",
            right_operand_id="tensor_b",
        )
    ],
)
```

A step consumes two operands. Later steps should refer to operands that still exist at that point in the plan.
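The "operands must still exist" rule can be simulated with a set of live operand ids: each step removes its two inputs and adds its result. A minimal sketch of that bookkeeping (the result-id convention here is made up for illustration):

```python
live = {"tensor_a", "tensor_b", "tensor_c"}

steps = [
    ("step_contract_ab", "tensor_a", "tensor_b"),
    ("step_contract_abc", "step_contract_ab", "tensor_c"),
]

for step_id, left, right in steps:
    # Both operands must be alive when the step runs.
    assert left in live and right in live, f"{step_id}: operand no longer exists"
    live -= {left, right}
    # Use the step id as the id of the produced operand (illustrative convention).
    live.add(step_id)

print(live)  # {'step_contract_abc'}
```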
View snapshots preserve contraction-scene layout state:
- ContractionOperandLayoutSpec stores one operand id, position, and size
- ContractionViewSnapshotSpec stores operand layouts after a given number of applied steps
Snapshots are mainly for the editor UI. They are still part of the saved design and round-trip through JSON.
When you re-import supported generated Python, manual contraction steps and
current periodic-mode payloads can be recovered, but view_snapshots are reset
because generated code does not carry editor layout state.
Linear periodic mode uses:
- LinearPeriodicChainSpec
- LinearPeriodicCellSpec
- LinearPeriodicCellName
- LinearPeriodicTensorRole
Import these from tensor_network_editor.models when you need them directly.
This mode stores an initial cell, periodic cell, and final cell. Each cell can have tensors, edges, groups, notes, metadata, and its own contraction plan.
Most users can start with normal NetworkSpec fields and only use these models
when working with repeated one-dimensional structures.
Grid periodic mode uses:
- GridPeriodicGridSpec
- LinearPeriodicCellSpec
- GridPeriodicCellName
- GridPeriodicTensorRole
Import these from tensor_network_editor.models when you need them directly.
This mode stores nine representative cells around a center cell:
- top_left, top, top_right
- left, center, right
- bottom_left, bottom, bottom_right
Each cell can store tensors, edges, groups, notes, metadata, and its own
contraction plan. The typed boundary roles (up, right, down, left)
describe how open bonds continue between neighboring cells.
Grid periodic payloads are mainly for repeated two-dimensional structures. Manual plans inside a grid cell can refer to virtual boundary operands as if they were clickable neighbors:
- __grid_up__
- __grid_right__
- __grid_down__
- __grid_left__
These operands represent already-built payloads or surviving frontiers, not
physical tensors stored in the cell. Generated code folds the grid in row-major
order: it starts at the upper-left cell, moves left-to-right through each row,
then carries the current partial result into the next row. If the plan leaves
more than one operand alive, the export keeps those values in
remaining_operands instead of forcing a final scalar or tensor.
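The row-major folding order can be sketched as a simple traversal: left-to-right within a row, carrying the partial result into the next row. The cell grid and the "visit" step below are illustrative stand-ins for the generated contraction:

```python
rows = [
    ["top_left", "top", "top_right"],
    ["left", "center", "right"],
    ["bottom_left", "bottom", "bottom_right"],
]

visit_order = []
for row in rows:                  # start at the upper-left cell
    for cell in row:              # move left-to-right through each row
        visit_order.append(cell)  # the partial result carries into the next row

print(visit_order[0], visit_order[-1])  # top_left bottom_right
```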
Hyperedges are also intentionally not stored inside these cells in v1.
Tree periodic mode uses:
- TreePeriodicTreeSpec
- LinearPeriodicCellSpec
- TreePeriodicCellName
- TreePeriodicTensorRole
Import these from tensor_network_editor.models when you need them directly.
This mode stores three representative cells:
- root_cell
- branch_cell
- leaf_cell
It also stores a branching_factor and the active representative cell. Parent
and child boundary tensors describe how the local graph continues upward or
downward in the repeated tree.
Tree periodic payloads are for hierarchical repeated structures. Each tree cell can store a manual contraction plan, and those plans can refer to virtual tree boundaries:
- __tree_parent__
- __tree_child_<index>__
These operands represent the parent payload or one child payload at the current
cell boundary. Generated code contracts from the leaves toward the root, level
by level. That bottom-up direction keeps the live frontier bounded and lets a
manual plan preserve a partial tree network in remaining_operands whenever
the user intentionally leaves several operands alive.
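The leaves-toward-root direction can be sketched as a level-by-level fold over a small tree with branching_factor 2. The string-merge step is an illustrative stand-in for contracting a parent with its child payloads:

```python
branching_factor = 2
depth = 3  # levels: root (0), branch (1), leaves (2)

# Start from one payload per leaf, then fold upward level by level.
level = ["leaf"] * (branching_factor ** (depth - 1))
while len(level) > 1:
    # Each parent merges branching_factor child payloads into one payload,
    # so the live frontier shrinks at every level.
    level = [
        "(" + "+".join(level[i : i + branching_factor]) + ")"
        for i in range(0, len(level), branching_factor)
    ]

print(level[0])  # ((leaf+leaf)+(leaf+leaf))
```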
Hyperedges are also intentionally not stored inside these cells in v1.
Important enums:
- EngineName: tensornetwork, quimb, tensorkrowch, einsum_numpy, einsum_torch
- TensorCollectionFormat: list, matrix, dict
Important result models:
- EditorResult: returned by a confirmed editor session
- CodegenResult: returned by generate_code(...)
- ValidationIssue: returned by validate_spec(...)
- LintReport: returned by lint_spec(...)
- SpecAnalysisReport: returned by analyze_spec(...); its contraction result includes warnings and synthetic_operands when analysis lowers hyperedges
- SpecDiffResult: returned by diff_specs(...)
- SemanticFieldChange, SemanticDiffEntry, SemanticSpecDiffResult: returned by semantic_diff_specs(...)
Most result models provide to_dict() when they are intended for structured
headless output.
- Keep ids stable if you plan to diff or post-process saved files.
- Use meaningful names because generated code is easier to read.
- Let save_spec(...) validate before writing JSON.
- Use open_indices() when you want to inspect dangling legs.
- Use metadata for your own small annotations, not for core connectivity.
- Use tensor_data for deterministic tensor values that should affect generated backend code.
- Prefer metadata.tags for quick labels that you want to reuse in filters.
- Prefer guided tensor keys like role, state, provenance, and symmetry when they fit what you want to describe.
- Prefer guided index keys like leg_kind, symmetry, and observable for leg semantics.
- Expect lint_spec(...) to treat those guided keys specially and surface higher-signal modeling warnings when they conflict with the graph structure.
- Keep free-form metadata reasonably small so the editor stays responsive.
- Prefer JSON as the long-term design artifact and generated code as the backend-specific artifact.