diff --git a/AGENTS.md b/AGENTS.md
index dda7689..225d223 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -18,24 +18,27 @@ The library requires a running SQL Server instance for its integration tests.
```
lib/
- ex_sql_client.ex # Public API – the main entry point
+ ex_sql_client.ex # Public API – start_link/query/prepare/transaction
ex_sql_client/
- connection.ex # DBConnection behaviour implementation
- protocol.ex # Netler RPC call helpers
- query.ex # Query struct
- result.ex # Result struct
+ protocol.ex # DBConnection behaviour implementation (Netler RPC)
+ query.ex # Query struct (statement + statement_id)
+ ecto.ex # Ecto 3 adapter entry point (use Ecto.Adapters.SQL)
+ ecto/
+ connection.ex # Ecto.Adapters.SQL.Connection: SQL generation + result normalisation
dotnet/dotnet_sql_client/
DotnetSqlClient.csproj # .NET 8 project file
Program.cs # Netler.NET server bootstrap
SqlAdapter.cs # SQL Server operations via Microsoft.Data.SqlClient
test/
- query_test.exs
- transaction_test.exs
- prepared_statement_test.exs
- data_type_test.exs
- test_helper.exs
+ query_test.exs # Integration: raw query tests
+ transaction_test.exs # Integration: transaction tests
+ prepared_statement_test.exs # Integration: prepared statement tests
+ data_type_test.exs # Integration: type mapping tests
+ test_helper.exs # Testcontainers setup and connection string injection
+ ecto/
+ query_test.exs # Unit: SQL generation tests (no DB required)
+ ecto_adapter_test.exs # Integration: end-to-end Ecto adapter tests
mix.exs # Build config and project metadata
-docker-compose.yml # Local SQL Server for integration tests
.github/workflows/ # GitHub Actions CI (build + test + publish)
```
@@ -60,18 +63,30 @@ in `dotnet/dotnet_sql_client/` and places the binary in `priv/`.
## Testing
-Integration tests spin up a SQL Server container automatically via
-[Testcontainers](https://hex.pm/packages/testcontainers). Docker (or a
-compatible runtime) must be available on the machine.
+There are two categories of tests:
+
+**Unit tests** (no database required) — SQL generation tests for the Ecto adapter:
+
+```bash
+mix test test/ecto/query_test.exs
+```
+
+**Integration tests** spin up a SQL Server container automatically via
+[Testcontainers](https://hex.pm/packages/testcontainers). Docker or a
+compatible container runtime (e.g. Podman) must be available.
```bash
-mix test --only integration
+# Core driver integration tests
+mix test --include integration
+
+# All Ecto adapter tests (unit + integration)
+mix test test/ecto/ --include integration
```
The container is started once in `test/test_helper.exs` and the connection
-string is shared with all test modules via `Application.put_env`. All tests are
-tagged with `@tag :integration`. There are no unit tests that run without a
-live database.
+string is shared with all test modules via `Application.put_env`. Integration
+tests are tagged with `@tag :integration` and are excluded by default; pass
+`--include integration` to run them.
---
@@ -101,25 +116,43 @@ live database.
## Architecture notes
-The call chain for a query is:
+The call chain for a raw (`ExSqlClient`) query is:
```
Elixir caller
- → ExSqlClient (DBConnection behaviour)
- → Netler RPC (TCP socket, port process)
- → Program.cs (Netler.NET server)
- → SqlAdapter.cs
- → Microsoft.Data.SqlClient
- → SQL Server
+ → ExSqlClient (public API)
+ → ExSqlClient.Protocol (DBConnection behaviour)
+ → Netler RPC (TCP socket, port process)
+ → Program.cs (Netler.NET server)
+ → SqlAdapter.cs
+ → Microsoft.Data.SqlClient
+ → SQL Server
+```
+
+When using the Ecto adapter, an additional layer sits in front:
+
+```
+Ecto / Repo
+ → ExSqlClient.Ecto (Ecto.Adapters.SQL)
+ → ExSqlClient.Ecto.Connection (SQL generation, result normalisation)
+ → ExSqlClient.Protocol (DBConnection behaviour)
+ → … (same chain as above)
```
Key invariants:
- The .NET process is started by Netler as a port. It listens on a TCP port
passed as `args[0]`; the Elixir PID is passed as `args[1]`.
- All route names in `Program.cs` must be registered and match what the
- Elixir layer calls via Netler.
+ Elixir layer calls via `Netler.Client.invoke/3`.
- Connection and transaction state is held in `SqlAdapter` — one instance per
connection process.
+- The Ecto adapter generates MSSQL-dialect SQL: bracket identifiers `[name]`,
+ `TOP(n)` for limits without offset, `OFFSET … FETCH NEXT … ROWS ONLY` for
+ pagination, and `OUTPUT INSERTED/DELETED` for `RETURNING`.
+- Netler/MessagePack deserialises result rows as Elixir maps, which do not
+  preserve SELECT column order. `ExSqlClient.Ecto.Connection` recovers the
+  correct order by parsing the SELECT projection or OUTPUT clause from the
+  SQL string before returning results to Ecto.
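+
+  As a sketch of why the re-keying is needed (the literal maps and names here
+  are illustrative, not the adapter's actual helpers):
+
+  ```elixir
+  # MessagePack hands back a map, so the ["id", "name"] SELECT order is lost:
+  row = %{"name" => "Alice", "id" => 1}
+
+  # The column order recovered from the generated SQL restores the
+  # positional row that Ecto expects:
+  Enum.map(["id", "name"], &Map.get(row, &1))
+  # => [1, "Alice"]
+  ```
+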
---
@@ -130,21 +163,31 @@ Good first contributions:
- Adding `@spec` / `@type` annotations to the Elixir modules.
- Improving error propagation from the .NET layer to Elixir.
- Updating dependencies as new versions are released.
+- Expanding `test/ecto/query_test.exs` with additional SQL generation cases.
Areas requiring extra care:
- Anything touching `Program.cs` or `SqlAdapter.cs` — changes must compile
with the pinned .NET version and be verified against a live SQL Server.
+- `SqlAdapter.cs` DML result handling — `ExecuteReader` is used for all
+ statements so that `OUTPUT` clauses are supported; `RecordsAffected` is
+ captured inside the `using` block and injected as a synthetic
+ `__rows_affected__` row for DML without an `OUTPUT` clause.
- Netler version upgrades — the RPC protocol may change between major versions;
always check `Program.cs` against the new Netler.NET API.
- `DBConnection` behaviour callbacks — maintain compatibility with the
`db_connection` contract.
+- `ExSqlClient.Ecto.Connection` — column-order recovery relies on regex parsing
+ of the generated SQL; changes to the SQL generator must keep
+ `column_order_from_sql/1` in sync.
---
## Pull request checklist
- [ ] `mix compile --warnings-as-errors` passes.
-- [ ] `mix test --only integration` passes against a local SQL Server.
+- [ ] `mix test test/ecto/query_test.exs` passes (no DB needed).
+- [ ] `mix test --include integration` passes against a local SQL Server.
+- [ ] `mix test test/ecto/ --include integration` passes for Ecto adapter changes.
- [ ] `mix format --check-formatted` passes.
- [ ] `mix credo` passes.
- [ ] New public functions have `@doc` and `@spec`.
@@ -157,5 +200,11 @@ Areas requiring extra care:
- Windows CI — the SQL Server Docker container is Linux-only in GitHub Actions;
the library itself is cross-platform but CI runs on Linux.
- Supporting databases other than Microsoft SQL Server.
+- Ecto migrations / DDL — `execute_ddl/1` raises intentionally; use
+ `ExSqlClient.query/3` directly for schema changes.
+- `Repo.stream/2` and cursor-based fetching — the Netler/C# layer does not
+ implement server-side cursors.
+- Multiple result sets via the Ecto adapter (`query_many/4` raises); use the
+ raw `ExSqlClient` API if you need multiple result sets.
- Changing the `DBConnection` protocol — maintain compatibility with standard
- Elixir database tooling (Ecto, etc.).
+ Elixir database tooling.
diff --git a/README.md b/README.md
index 446ae51..5ae10f5 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
-[](https://travis-ci.com/svan-jansson/ex_sql_client)
+[](https://github.com/svan-jansson/ex_sql_client/actions/workflows/build-test-publish.yml)
[](https://hex.pm/packages/ex_sql_client)
[](https://hex.pm/packages/ex_sql_client)
@@ -18,7 +18,7 @@ Microsoft SQL Server driver for Elixir based on [Netler](https://github.com/svan
## Checklist
-- ☑ Support encrypted connections
+- ☑ Support encrypted connections
- ☑ Support multiple result sets
- ☑ Implement the `DBConnection` behaviour
- ☑ Connect
@@ -27,19 +27,213 @@ Microsoft SQL Server driver for Elixir based on [Netler](https://github.com/svan
- ☑ Transactions
- ☑ Prepared Statements
- ☑ Release first version on hex.pm
-- ☐ Provide an `Ecto.Adapter` that is compatible with Ecto 3
+- ☑ Provide an `Ecto.Adapter` that is compatible with Ecto 3
-## Code Examples
+## Installation
-### Connecting to a Server and Executing a Query
+Add `ex_sql_client` to your dependencies in `mix.exs`:
+
+```elixir
+def deps do
+ [
+ {:ex_sql_client, "~> 0.4"}
+ ]
+end
+```
+
+To use the Ecto adapter, also add `ecto` and `ecto_sql`:
+
+```elixir
+def deps do
+ [
+ {:ex_sql_client, "~> 0.4"},
+ {:ecto, "~> 3.10"},
+ {:ecto_sql, "~> 3.10"}
+ ]
+end
+```
+
+---
+
+## Using ExSqlClient Directly
+
+Use this approach when you want low-level access to SQL Server without Ecto, or when you need to run raw DDL, stored procedures, or arbitrary queries.
+
+### Connecting
+
+Start a connection using a standard ADO.NET connection string:
```elixir
{:ok, conn} =
- ExSqlClient.start_link(
- connection_string:
- "Server=myServerAddress;Database=myDataBase;User Id=myUsername;Password=myPassword;"
- )
+ ExSqlClient.start_link(
+ connection_string:
+ "Server=myServerAddress;Database=myDataBase;User Id=myUsername;Password=myPassword;"
+ )
+```
+
+### Executing Queries
+
+Pass parameters as a map whose keys match the `@paramName` placeholders in your SQL:
+
+```elixir
+{:ok, rows} =
+ ExSqlClient.query(conn, "SELECT * FROM [records] WHERE [status] = @status", %{status: 1})
+
+# rows is a list of maps, one map per row, with string column names as keys
+# e.g. [%{"id" => 1, "status" => 1, "name" => "foo"}, ...]
+```
+
+Queries with no parameters:
+
+```elixir
+{:ok, rows} = ExSqlClient.query(conn, "SELECT @@VERSION", %{})
+```
+
+### Transactions
+
+```elixir
+DBConnection.transaction(conn, fn conn ->
+ {:ok, _} = ExSqlClient.query(conn, "INSERT INTO [orders] ([ref]) VALUES (@ref)", %{ref: "ORD-1"})
+ {:ok, _} = ExSqlClient.query(conn, "UPDATE [stock] SET [qty] = [qty] - 1 WHERE [id] = @id", %{id: 42})
+end)
+```
+
+### Prepared Statements
+
+```elixir
+query = %ExSqlClient.Query{statement: "SELECT * FROM [users] WHERE [email] = @email"}
+
+{:ok, query} = DBConnection.prepare(conn, query)
+{:ok, rows} = DBConnection.execute(conn, query, %{email: "user@example.com"})
+:ok = DBConnection.close(conn, query)
+```
+
+---
+
+## Using the Ecto Adapter
+
+`ExSqlClient.Ecto` is a full `Ecto.Adapters.SQL` adapter for Microsoft SQL Server. It generates MSSQL-dialect SQL (bracket identifiers, `TOP(n)`, `OFFSET…FETCH`, `OUTPUT INSERTED/DELETED` for returning) and maps Ecto types to SQL Server column types.
+
+### Setting Up a Repo
+
+```elixir
+defmodule MyApp.Repo do
+ use Ecto.Repo,
+ otp_app: :my_app,
+ adapter: ExSqlClient.Ecto
+end
+```
+
+### Configuration
+
+```elixir
+# config/config.exs
+config :my_app, MyApp.Repo,
+ connection_string:
+ "Server=tcp:db.example.com,1433;Database=mydb;User Id=myapp_user;Password=secret;Encrypt=True"
+```
+
+Add the repo to your application's supervision tree:
+
+```elixir
+def start(_type, _args) do
+ children = [
+ MyApp.Repo
+ ]
+ Supervisor.start_link(children, strategy: :one_for_one)
+end
+```
+
+### Schema Example
+
+```elixir
+defmodule MyApp.User do
+ use Ecto.Schema
+
+ schema "users" do
+ field :name, :string
+ field :email, :string
+ field :active, :boolean, default: true
+ timestamps()
+ end
+end
+```
+
+### Query Examples
+
+```elixir
+# Fetch all active users
+MyApp.Repo.all(from u in MyApp.User, where: u.active == true)
+
+# Insert a record and return it
+{:ok, user} = MyApp.Repo.insert(%MyApp.User{name: "Alice", email: "alice@example.com"})
+
+# Update
+MyApp.Repo.update_all(from(u in MyApp.User, where: u.active == false), set: [name: "Deactivated"])
+
+# Delete
+MyApp.Repo.delete_all(from u in MyApp.User, where: u.email == ^"old@example.com")
+
+# Raw SQL via the Ecto adapter
+{:ok, result} = MyApp.Repo.query("SELECT @@VERSION")
+```
+
+### Known Limitations
+
+| Feature | Status |
+|---|---|
+| Migrations / DDL | Not supported — use `ExSqlClient.query/3` directly for DDL |
+| `Repo.stream/2` | Raises at runtime — cursors are not supported by the protocol |
+| `query_many/4` | Raises at runtime — multiple result sets are not supported |
+| `on_conflict` | Only `:raise` is supported |
+| Window functions | Not supported |
+| Materialized CTEs | Not supported |
+| `DISTINCT` on multiple columns | Not supported; use `distinct: true` for a distinct result set |
+| Aggregate filters (`filter/2`) | Not supported |
+| `json_extract_path` | Not supported; use `fragment/1` with `JSON_VALUE`/`JSON_QUERY` instead |
+| `OFFSET` without `ORDER BY` | Raises at compile time — SQL Server requires `ORDER BY` when using `OFFSET` |
+| `OFFSET` without `LIMIT` | Raises at compile time |
+
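+For the `json_extract_path` workaround, a query along these lines should work
+(the `events` table and its JSON `payload` column are hypothetical):
+
+```elixir
+import Ecto.Query
+
+# JSON_VALUE extracts a scalar; JSON_QUERY extracts an object or array.
+from(e in "events",
+  where: fragment("JSON_VALUE(?, '$.type')", e.payload) == "click",
+  select: %{id: e.id, meta: fragment("JSON_QUERY(?, '$.meta')", e.payload)}
+)
+```
+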
+---
+
+## Performance
+
+ExSqlClient uses [Netler](https://github.com/svan-jansson/netler) to communicate with a .NET worker process over a local TCP socket using MessagePack serialisation. Every query involves at least one Elixir → .NET → SQL Server → .NET → Elixir round-trip.
+
+### Benchmark highlights
+
+Measured with `mix run bench/benchmarks.exs` against SQL Server 2022 in a local container. Single-process, `pool_size: 5` for query scenarios, `pool_size: 1` for prepared statements. Machine: Intel Core Ultra 9 285H, Elixir 1.19.5, Erlang/OTP 28.
+
+| Scenario | Median latency | Throughput |
+|---|---|---|
+| Netler IPC only (no SQL) | 0.022 ms | ~26 000 req/s |
+| SELECT constant (`SELECT 1`) | 0.95 ms | ~900 req/s |
+| SELECT 1 row | 0.94 ms | ~900 req/s |
+| SELECT 1 row, parameterised | 1.13 ms | ~760 req/s |
+| SELECT 10 rows | 1.17 ms | ~770 req/s |
+| SELECT 100 rows | 1.37 ms | ~670 req/s |
+| Prepared statement (SELECT 1 row) | 0.40 ms | ~1 400 req/s |
+| INSERT | 5.34 ms | ~180 req/s |
+| Transaction (INSERT + commit) | 6.38 ms | ~150 req/s |
+
+### What the numbers mean
+
+**Netler IPC overhead is negligible.** The raw IPC round-trip (no SQL) costs ~0.02 ms. The ~1 ms you see on a simple SELECT is almost entirely SQL Server query execution and ADO.NET overhead — not the Elixir↔.NET transport.
+
+**Prepared statements halve read latency.** Reusing a prepared statement drops median latency from ~0.94 ms to ~0.40 ms by skipping the SQL Server parse/compile step on repeated identical queries.
+
+**Row count has modest impact on reads.** Fetching 100 rows takes ~1.37 ms vs ~0.94 ms for 1 row — the extra 0.4 ms is serialisation and transfer of the additional data.
+
+**Write operations are slower due to SQL Server I/O.** An INSERT takes ~5.3 ms; wrapping it in an explicit transaction adds ~1 ms for the `BEGIN`/`COMMIT` round-trips.
+
+**Throughput scales with pool size.** The figures above are for a single Elixir process. With a larger `pool_size` and concurrent callers, total throughput grows proportionally up to the SQL Server's own limits.
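+
+As a sketch of fanning out concurrent callers (assuming `start_link/1` forwards
+`:pool_size` to DBConnection; connection string and SQL are illustrative):
+
+```elixir
+{:ok, conn} =
+  ExSqlClient.start_link(
+    connection_string: "Server=localhost,1433;Database=mydb;User Id=sa;Password=secret;",
+    pool_size: 20
+  )
+
+# 1 000 queries spread across up to 20 concurrent callers
+1..1_000
+|> Task.async_stream(
+  fn i -> ExSqlClient.query(conn, "SELECT @i AS [i]", %{i: i}) end,
+  max_concurrency: 20
+)
+|> Stream.run()
+```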
+
+### Running the benchmarks yourself
+
+```bash
+# Uses Testcontainers to spin up SQL Server automatically
+mix run bench/benchmarks.exs
-{:ok, response} =
- ExSqlClient.query(conn, "SELECT * FROM [records] WHERE [status]=@status", %{status: 1})
+# Or point at an existing SQL Server instance
+MSSQL_CONNECTION_STRING="Server=...;..." mix run bench/benchmarks.exs
```
diff --git a/dotnet/dotnet_sql_client/SqlAdapter.cs b/dotnet/dotnet_sql_client/SqlAdapter.cs
index e84a317..f989a86 100644
--- a/dotnet/dotnet_sql_client/SqlAdapter.cs
+++ b/dotnet/dotnet_sql_client/SqlAdapter.cs
@@ -159,6 +159,7 @@ private List<Dictionary<string, object>> ExecuteStatement(string sql, IDictiona
}
}
+ int recordsAffected;
using (var reader = command.ExecuteReader())
{
do
@@ -176,6 +177,20 @@ private List<Dictionary<string, object>> ExecuteStatement(string sql, IDictiona
results.Add(row);
}
} while (reader.NextResult());
+
+ // Capture before Dispose(); SELECT returns -1, DML returns >= 0.
+ recordsAffected = reader.RecordsAffected;
+ }
+
+ // For DML without an OUTPUT clause (UPDATE/DELETE/INSERT without RETURNING)
+ // no result rows come back, but Ecto needs the affected-row count to detect
+ // stale entries and return correct counts from update_all/delete_all.
+ if (results.Count == 0 && recordsAffected >= 0)
+ {
+        results.Add(new Dictionary<string, object>
+ {
+ ["__rows_affected__"] = (long)recordsAffected
+ });
}
}
finally
diff --git a/lib/ex_sql_client/ecto.ex b/lib/ex_sql_client/ecto.ex
new file mode 100644
index 0000000..ac1367c
--- /dev/null
+++ b/lib/ex_sql_client/ecto.ex
@@ -0,0 +1,81 @@
+if Code.ensure_loaded?(Ecto.Adapters.SQL) do
+defmodule ExSqlClient.Ecto do
+ @moduledoc """
+ Ecto 3 adapter for ExSqlClient (Microsoft SQL Server).
+
+ This adapter implements `Ecto.Adapters.SQL` on top of the existing
+ `ExSqlClient.Protocol` / `DBConnection` stack.
+
+ ## Usage
+
+ defmodule MyApp.Repo do
+ use Ecto.Repo,
+ otp_app: :my_app,
+ adapter: ExSqlClient.Ecto
+ end
+
+ ## Configuration
+
+ Pass the same connection options as `ExSqlClient.start_link/1`:
+
+ config :my_app, MyApp.Repo,
+ connection_string: "Server=localhost,1433;Database=mydb;User Id=sa;Password=secret;TrustServerCertificate=True"
+
+ ## Known Limitations
+
+ * **`Repo.stream/2`** — raises at runtime; cursors are not supported by the
+ underlying protocol.
+ * **`query_many/4`** — raises at runtime; multiple result sets are not
+ supported.
+ * **Migrations** — not supported; use `ExSqlClient.query/4` for DDL.
+ * **`on_conflict`** — only `:raise` is supported.
+ * **Window functions** — not supported.
+ """
+
+ use Ecto.Adapters.SQL, driver: :ex_sql_client
+
+ @impl Ecto.Adapter
+ def ensure_all_started(_config, type) do
+ # The default implementation tries to start :ex_sql_client as an OTP
+ # application which would attempt to launch the .NET process before any
+ # connection options are known. We start only the runtime deps instead.
+ with {:ok, netler_apps} <- Application.ensure_all_started(:netler, type),
+ {:ok, db_conn_apps} <- Application.ensure_all_started(:db_connection, type) do
+ {:ok, Enum.uniq(netler_apps ++ db_conn_apps)}
+ end
+ end
+
+ @impl Ecto.Adapter
+ def loaders(:boolean, type), do: [&bool_decode/1, type]
+ def loaders(:binary_id, type), do: [Ecto.UUID, type]
+ def loaders(_, type), do: [type]
+
+ @impl Ecto.Adapter
+ def dumpers(:boolean, type), do: [type, &bool_encode/1]
+ def dumpers(:binary_id, type), do: [type, Ecto.UUID]
+ def dumpers(_, type), do: [type]
+
+ @impl Ecto.Adapter.Schema
+ def autogenerate(:binary_id), do: Ecto.UUID.generate()
+ def autogenerate(:embed_id), do: Ecto.UUID.generate()
+ def autogenerate(type), do: super(type)
+
+ @impl Ecto.Adapter.Migration
+ def supports_ddl_transaction?, do: false
+
+ @impl Ecto.Adapter.Migration
+ def lock_for_migrations(_meta, _opts, fun), do: fun.()
+
+ # MSSQL BIT column is stored as 0/1 integer; map back to Elixir boolean.
+ defp bool_decode(nil), do: {:ok, nil}
+ defp bool_decode(0), do: {:ok, false}
+ defp bool_decode(1), do: {:ok, true}
+ defp bool_decode(v) when is_boolean(v), do: {:ok, v}
+ defp bool_decode(_), do: :error
+
+ defp bool_encode(nil), do: {:ok, nil}
+ defp bool_encode(false), do: {:ok, 0}
+ defp bool_encode(true), do: {:ok, 1}
+ defp bool_encode(_), do: :error
+end
+end
diff --git a/lib/ex_sql_client/ecto/connection.ex b/lib/ex_sql_client/ecto/connection.ex
new file mode 100644
index 0000000..8731957
--- /dev/null
+++ b/lib/ex_sql_client/ecto/connection.ex
@@ -0,0 +1,1197 @@
+if Code.ensure_loaded?(Ecto.Adapters.SQL.Connection) do
+defmodule ExSqlClient.Ecto.Connection do
+ @moduledoc false
+
+ @behaviour Ecto.Adapters.SQL.Connection
+
+ alias ExSqlClient.Query
+ alias Ecto.Query.Tagged
+
+ @parent_as __MODULE__
+ alias Ecto.Query, as: EctoQuery
+ alias Ecto.Query.{BooleanExpr, ByExpr, JoinExpr, QueryExpr, WithExpr}
+
+ # ---------------------------------------------------------------------------
+ # Execution callbacks
+ # ---------------------------------------------------------------------------
+
+ @impl true
+ def child_spec(opts) do
+ DBConnection.child_spec(ExSqlClient.Protocol, opts)
+ end
+
+ @impl true
+ def prepare_execute(conn, _name, sql, params, opts) do
+ query = %Query{statement: IO.iodata_to_binary(sql)}
+ encoded = encode_params(params)
+ col_order = column_order_from_sql(query.statement)
+
+ case DBConnection.prepare_execute(conn, query, encoded, opts) do
+ {:ok, q, result} -> {:ok, q, normalize_result(result, col_order)}
+ {:error, _} = err -> err
+ end
+ end
+
+ @impl true
+ def execute(conn, %Query{} = query, params, opts) do
+ encoded = encode_params(params)
+ col_order = column_order_from_sql(query.statement)
+
+ case DBConnection.execute(conn, query, encoded, opts) do
+ {:ok, q, result} -> {:ok, q, normalize_result(result, col_order)}
+ {:error, _} = err -> err
+ end
+ end
+
+ def execute(conn, sql, params, opts) when is_binary(sql) or is_list(sql) do
+ query = %Query{statement: IO.iodata_to_binary(sql)}
+ encoded = encode_params(params)
+ col_order = column_order_from_sql(query.statement)
+
+ case DBConnection.prepare_execute(conn, query, encoded, opts) do
+ {:ok, q, result} -> {:ok, q, normalize_result(result, col_order)}
+ {:error, _} = err -> err
+ end
+ end
+
+ @impl true
+ def query(conn, sql, params, opts) do
+ query = %Query{statement: IO.iodata_to_binary(sql)}
+ encoded = encode_params(params)
+ col_order = column_order_from_sql(query.statement)
+
+ case DBConnection.prepare_execute(conn, query, encoded, opts) do
+ {:ok, _q, result} -> {:ok, normalize_result(result, col_order)}
+ {:error, _} = err -> err
+ end
+ end
+
+ @impl true
+ def query_many(_conn, _sql, _params, _opts) do
+ raise RuntimeError, "query_many/4 is not supported by ExSqlClient.Ecto"
+ end
+
+ @impl true
+ def stream(_conn, _sql, _params, _opts) do
+ raise RuntimeError,
+ "Repo.stream/2 is not supported by ExSqlClient.Ecto — cursors are not implemented"
+ end
+
+ @impl true
+ def to_constraints(%DBConnection.ConnectionError{message: msg}, _opts) do
+ cond do
+ # Unique key violation (duplicate key)
+ msg =~ "2601" or msg =~ "2627" ->
+ [unique: extract_constraint_name(msg)]
+
+ # Foreign key violation
+ msg =~ "547" ->
+ [foreign_key: extract_constraint_name(msg)]
+
+ true ->
+ []
+ end
+ end
+
+ def to_constraints(_, _opts), do: []
+
+ @impl true
+ def explain_query(conn, sql, params, opts) do
+ explain_sql =
+ "SET STATISTICS IO ON; SET STATISTICS TIME ON; #{sql}; SET STATISTICS IO OFF; SET STATISTICS TIME OFF;"
+
+ query(conn, explain_sql, params, opts)
+ end
+
+ @impl true
+ def execute_ddl(_command) do
+ raise RuntimeError,
+ "DDL/migrations are not supported by ExSqlClient.Ecto. " <>
+ "Use ExSqlClient.query/4 directly for DDL statements."
+ end
+
+ @impl true
+ def ddl_logs(_result), do: []
+
+ @impl true
+ def table_exists_query(table) do
+ {"SELECT 1 FROM sys.tables WHERE [name] = @1", [table]}
+ end
+
+ # ---------------------------------------------------------------------------
+ # Result normalisation
+ # ---------------------------------------------------------------------------
+
+ # Synthetic row injected by the .NET side for DML without an OUTPUT clause.
+ # The C# layer sets this when ExecuteReader returns no rows but RecordsAffected >= 0.
+ defp normalize_result([%{"__rows_affected__" => n}], _col_order) do
+ %{columns: nil, rows: nil, num_rows: n}
+ end
+
+ defp normalize_result(nil, _col_order), do: %{columns: nil, rows: [], num_rows: 0}
+ # For an empty result (SELECT with 0 rows or DDL), return an empty row list
+ # so that Ecto's Enum.map/2 over rows succeeds. DML without OUTPUT never
+ # reaches here after the C# __rows_affected__ injection.
+ defp normalize_result([], col_order), do: %{columns: col_order, rows: [], num_rows: 0}
+
+ defp normalize_result(rows, col_order) when is_list(rows) do
+ raw_keys = rows |> List.first() |> Map.keys()
+
+ # Use the SQL-derived column order when it covers exactly the same set of
+    # keys. Elixir small maps (≤ 32 keys) keep string keys in sorted term
+    # order, which loses the SELECT order Ecto relies on for positional loading.
+ ordered_keys =
+ if col_order && length(col_order) == length(raw_keys) &&
+ Enum.sort(col_order) == Enum.sort(raw_keys) do
+ col_order
+ else
+ raw_keys
+ end
+
+ normalized = Enum.map(rows, fn row -> Enum.map(ordered_keys, &Map.get(row, &1)) end)
+ %{columns: ordered_keys, rows: normalized, num_rows: length(normalized)}
+ end
+
+ # ---------------------------------------------------------------------------
+ # Column-order extraction from SQL
+ # ---------------------------------------------------------------------------
+
+ # Returns the ordered list of result-column names as the SQL SELECT (or OUTPUT)
+ # clause prescribes them, so that normalize_result/2 can re-key the unordered
+ # Elixir maps returned by Netler/MessagePack into the correct positional order.
+ defp column_order_from_sql(sql) when is_binary(sql) do
+ # OUTPUT clause takes precedence (INSERT/UPDATE/DELETE with RETURNING).
+ # Match the entire OUTPUT ... sequence and then scan within it.
+ case Regex.run(
+ ~r/OUTPUT\s+((?:(?:INSERTED|DELETED)\.\[[^\]]+\](?:,\s*)?)+)/i,
+ sql
+ ) do
+ [_, output_clause] ->
+ Regex.scan(~r/(?:INSERTED|DELETED)\.\[([^\]]+)\]/, output_clause)
+ |> Enum.map(fn [_, col] -> col end)
+
+ nil ->
+ # SELECT … FROM: capture the projection list and extract the trailing
+ # [identifier] of each comma-separated item (handles "t0.[col]" and
+ # "expr AS [alias]").
+ case Regex.run(
+ ~r/\ASELECT\s+(?:DISTINCT\s+)?(?:TOP\([^)]+\)\s+)?(.+?)\s+FROM\s/is,
+ sql
+ ) do
+ [_, select_clause] ->
+ select_clause
+ |> split_select_list()
+ |> Enum.map(&extract_bracketed_name/1)
+ |> Enum.reject(&is_nil/1)
+
+ _ ->
+ nil
+ end
+ end
+ end
+
+ defp column_order_from_sql(_), do: nil
+
+ defp extract_bracketed_name(item) do
+ case Regex.run(~r/\[([^\]]+)\]\s*\z/, String.trim(item)) do
+ [_, name] -> name
+ _ -> nil
+ end
+ end
+
+ # Split a SELECT projection list on commas that are NOT inside parentheses.
+ defp split_select_list(str) do
+ {items, current, _depth} =
+ String.graphemes(str)
+ |> Enum.reduce({[], "", 0}, fn
+ "(", {items, current, depth} -> {items, current <> "(", depth + 1}
+ ")", {items, current, depth} -> {items, current <> ")", depth - 1}
+ ",", {items, current, 0} -> {[current | items], "", 0}
+ char, {items, current, depth} -> {items, current <> char, depth}
+ end)
+
+ Enum.reverse([current | items])
+ end
+
+ # ---------------------------------------------------------------------------
+ # Parameter encoding
+ # ---------------------------------------------------------------------------
+
+ # Ecto passes params as positional list [v1, v2, ...].
+ # The .NET side expects named params: @1, @2, ... → %{"1" => v1, "2" => v2, ...}
+ defp encode_params([]), do: %{}
+
+ defp encode_params(params) when is_list(params) do
+ params
+ |> Enum.with_index(1)
+ |> Map.new(fn {val, idx} -> {Integer.to_string(idx), encode_value(val)} end)
+ end
+
+ defp encode_params(%{} = params), do: params
+
+ defp encode_value(nil), do: nil
+ defp encode_value(true), do: 1
+ defp encode_value(false), do: 0
+ defp encode_value(%{} = map), do: map
+ defp encode_value(v), do: v
+
+ defp extract_constraint_name(msg) do
+ case Regex.run(~r/'([^']+)'/, msg) do
+ [_, name] -> name
+ _ -> "unknown_constraint"
+ end
+ end
+
+ # ---------------------------------------------------------------------------
+ # SQL generation (MSSQL / SQL Server dialect)
+ # Based on Ecto.Adapters.Tds.Connection, Apache 2.0 licensed.
+ # Key MSSQL differences: TOP(n), OFFSET…FETCH, OUTPUT INSERTED/DELETED,
+ # [bracket] identifiers, @1 @2 … parameter markers.
+ # ---------------------------------------------------------------------------
+
+ binary_ops = [
+ ==: " = ",
+ !=: " <> ",
+ <=: " <= ",
+ >=: " >= ",
+ <: " < ",
+ >: " > ",
+ +: " + ",
+ -: " - ",
+ *: " * ",
+ /: " / ",
+ and: " AND ",
+ or: " OR ",
+ ilike: " LIKE ",
+ like: " LIKE "
+ ]
+
+ @binary_ops Keyword.keys(binary_ops)
+
+ Enum.map(binary_ops, fn {op, str} ->
+ defp handle_call(unquote(op), 2), do: {:binary_op, unquote(str)}
+ end)
+
+ defp handle_call(fun, _arity), do: {:fun, Atom.to_string(fun)}
+
+ @impl true
+ def all(query, as_prefix \\ []) do
+ sources = create_names(query, as_prefix)
+
+ cte = cte(query, sources)
+ from = from(query, sources)
+ select = select(query, sources)
+ join = join(query, sources)
+ where = where(query, sources)
+ group_by = group_by(query, sources)
+ having = having(query, sources)
+ combinations = combinations(query, as_prefix)
+ order_by = order_by(query, sources)
+ offset = offset(query, sources)
+ lock = lock(query, sources)
+
+ if query.offset != nil and query.order_bys == [],
+ do: error!(query, "ORDER BY is mandatory when OFFSET is set")
+
+ [cte, select, from, join, where, group_by, having, combinations, order_by, lock | offset]
+ end
+
+ @impl true
+ def update_all(query) do
+ sources = create_names(query, [])
+ cte = cte(query, sources)
+ {table, name, _model} = elem(sources, 0)
+
+ fields = update_fields(query, sources)
+ from = " FROM #{table} AS #{name}"
+ join = join(query, sources)
+ where = where(query, sources)
+ lock = lock(query, sources)
+
+ [
+ cte,
+ "UPDATE ",
+ name,
+ " SET ",
+ fields,
+ returning(query, 0, "INSERTED"),
+ from,
+ join,
+ where | lock
+ ]
+ end
+
+ @impl true
+ def delete_all(query) do
+ sources = create_names(query, [])
+ cte = cte(query, sources)
+ {table, name, _model} = elem(sources, 0)
+
+ delete = "DELETE #{name}"
+ from = " FROM #{table} AS #{name}"
+ join = join(query, sources)
+ where = where(query, sources)
+ lock = lock(query, sources)
+
+ [cte, delete, returning(query, 0, "DELETED"), from, join, where | lock]
+ end
+
+ @impl true
+ def insert(prefix, table, header, rows, on_conflict, returning, placeholders) do
+ counter_offset = length(placeholders) + 1
+ [] = on_conflict(on_conflict, header)
+ returning = returning(returning, "INSERTED")
+
+ values =
+ if header == [] do
+ [returning, " DEFAULT VALUES"]
+ else
+ [
+ ?\s,
+ ?(,
+ quote_names(header),
+ ?),
+ returning
+ | insert_all(rows, counter_offset)
+ ]
+ end
+
+ ["INSERT INTO ", quote_table(prefix, table), values]
+ end
+
+ defp on_conflict({:raise, _, []}, _header), do: []
+
+ defp on_conflict({_, _, _}, _header) do
+ error!(nil, "ExSqlClient.Ecto adapter supports only on_conflict: :raise")
+ end
+
+ defp insert_all(%EctoQuery{} = query, _counter) do
+ [?\s, all(query)]
+ end
+
+ defp insert_all(rows, counter) do
+ sql =
+ intersperse_reduce(rows, ",", counter, fn row, counter ->
+ {row, counter} = insert_each(row, counter)
+ {[?(, row, ?)], counter}
+ end)
+ |> elem(0)
+
+ [" VALUES " | sql]
+ end
+
+ defp insert_each(values, counter) do
+ intersperse_reduce(values, ", ", counter, fn
+ nil, counter ->
+ {"DEFAULT", counter}
+
+ {%EctoQuery{} = query, params_counter}, counter ->
+ {[?(, all(query), ?)], counter + params_counter}
+
+ {:placeholder, placeholder_index}, counter ->
+ {[?@ | placeholder_index], counter}
+
+ _, counter ->
+ {[?@ | Integer.to_string(counter)], counter + 1}
+ end)
+ end
+
+ @impl true
+ def update(prefix, table, fields, filters, returning) do
+ {fields, count} =
+ intersperse_reduce(fields, ", ", 1, fn field, acc ->
+ {[quote_name(field), " = @", Integer.to_string(acc)], acc + 1}
+ end)
+
+ {filters, _count} =
+ intersperse_reduce(filters, " AND ", count, fn
+ {field, nil}, acc ->
+ {[quote_name(field), " IS NULL"], acc}
+
+ {field, _value}, acc ->
+ {[quote_name(field), " = @", Integer.to_string(acc)], acc + 1}
+
+ field, acc ->
+ {[quote_name(field), " = @", Integer.to_string(acc)], acc + 1}
+ end)
+
+ [
+ "UPDATE ",
+ quote_table(prefix, table),
+ " SET ",
+ fields,
+ returning(returning, "INSERTED"),
+ " WHERE " | filters
+ ]
+ end
+
+ @impl true
+ def delete(prefix, table, filters, returning) do
+ {filters, _} =
+ intersperse_reduce(filters, " AND ", 1, fn
+ {field, nil}, acc ->
+ {[quote_name(field), " IS NULL"], acc}
+
+ {field, _value}, acc ->
+ {[quote_name(field), " = @", Integer.to_string(acc)], acc + 1}
+
+ field, acc ->
+ {[quote_name(field), " = @", Integer.to_string(acc)], acc + 1}
+ end)
+
+ [
+ "DELETE FROM ",
+ quote_table(prefix, table),
+ returning(returning, "DELETED"),
+ " WHERE " | filters
+ ]
+ end
+
+ # ---------------------------------------------------------------------------
+ # SELECT helpers
+ # ---------------------------------------------------------------------------
+
+ defp select(%{select: %{fields: fields}, distinct: distinct} = query, sources) do
+ [
+ "SELECT ",
+ distinct(distinct, sources, query),
+ limit(query, sources),
+ select(fields, sources, query)
+ ]
+ end
+
+ defp distinct(nil, _sources, _query), do: []
+ defp distinct(%ByExpr{expr: true}, _sources, _query), do: "DISTINCT "
+ defp distinct(%ByExpr{expr: false}, _sources, _query), do: []
+
+ defp distinct(%ByExpr{expr: exprs}, _sources, query) when is_list(exprs) do
+ error!(
+ query,
+ "DISTINCT with multiple columns is not supported by MSSQL. " <>
+ "Please use distinct(true) if you need a distinct result set"
+ )
+ end
+
+ defp select([], _sources, _query), do: "CAST(1 as bit)"
+
+ defp select(fields, sources, query) do
+ Enum.map_intersperse(fields, ", ", fn
+ {:&, _, [idx]} ->
+ case elem(sources, idx) do
+ {nil, source, nil} ->
+ error!(
+ query,
+ "ExSqlClient.Ecto does not support selecting all fields from fragment #{source}. " <>
+ "Please specify exactly which fields you want to select"
+ )
+
+ {source, _, nil} ->
+ error!(
+ query,
+ "ExSqlClient.Ecto does not support selecting all fields from #{source} without a schema. " <>
+ "Please specify a schema or specify exactly which fields you want in projection"
+ )
+
+ {_, source, _} ->
+ source
+ end
+
+ {key, value} ->
+ [select_expr(value, sources, query), " AS ", quote_name(key)]
+
+ value ->
+ select_expr(value, sources, query)
+ end)
+ end
+
+ defp select_expr({:not, _, [expr]}, sources, query) do
+ [?~, ?(, select_expr(expr, sources, query), ?)]
+ end
+
+ defp select_expr(value, sources, query), do: expr(value, sources, query)
+
+ defp from(%{from: %{source: source, hints: hints}} = query, sources) do
+ {from, name} = get_source(query, sources, 0, source)
+ [" FROM ", from, " AS ", name, hints(hints)]
+ end
+
+ # ---------------------------------------------------------------------------
+ # CTE
+ # ---------------------------------------------------------------------------
+
+ defp cte(%{with_ctes: %WithExpr{queries: [_ | _] = queries}} = query, sources) do
+ ctes = Enum.map_intersperse(queries, ", ", &cte_expr(&1, sources, query))
+ ["WITH ", ctes, " "]
+ end
+
+ defp cte(%{with_ctes: _}, _), do: []
+
+ defp cte_expr({_name, %{materialized: materialized}, _cte}, _sources, query)
+ when is_boolean(materialized) do
+ error!(query, "ExSqlClient.Ecto does not support materialized CTEs")
+ end
+
+ defp cte_expr({name, opts, cte}, sources, query) do
+ operation_opt = Map.get(opts, :operation)
+
+ [
+ quote_name(name),
+ cte_header(cte, query),
+ " AS ",
+ cte_query(cte, sources, query, operation_opt)
+ ]
+ end
+
+ defp cte_header(%QueryExpr{}, query) do
+ error!(query, "ExSqlClient.Ecto does not support fragment in CTE")
+ end
+
+ defp cte_header(%EctoQuery{select: %{fields: fields}} = query, _) do
+ [
+ " (",
+ Enum.map_intersperse(fields, ",", fn
+ {key, _} ->
+ quote_name(key)
+
+ other ->
+ error!(
+ query,
+ "ExSqlClient.Ecto expected field name or alias in CTE header, instead got #{inspect(other)}"
+ )
+ end),
+ ?)
+ ]
+ end
+
+ defp cte_query(query, sources, parent_query, nil) do
+ cte_query(query, sources, parent_query, :all)
+ end
+
+ defp cte_query(%EctoQuery{} = query, sources, parent_query, :all) do
+ query = put_in(query.aliases[@parent_as], {parent_query, sources})
+ [?(, all(query, subquery_as_prefix(sources)), ?)]
+ end
+
+ defp cte_query(%EctoQuery{} = query, _sources, _parent_query, operation) do
+ error!(query, "ExSqlClient.Ecto does not support data-modifying CTEs (operation: #{operation})")
+ end
+
+ # ---------------------------------------------------------------------------
+ # UPDATE fields
+ # ---------------------------------------------------------------------------
+
+ defp update_fields(%EctoQuery{updates: updates} = query, sources) do
+ for(
+ %{expr: expr} <- updates,
+ {op, kw} <- expr,
+ {key, value} <- kw,
+ do: update_op(op, key, value, sources, query)
+ )
+ |> Enum.intersperse(", ")
+ end
+
+ defp update_op(:set, key, value, sources, query) do
+ {_table, name, _model} = elem(sources, 0)
+ [name, ?., quote_name(key), " = " | expr(value, sources, query)]
+ end
+
+ defp update_op(:inc, key, value, sources, query) do
+ {_table, name, _model} = elem(sources, 0)
+ quoted = quote_name(key)
+ [name, ?., quoted, " = ", name, ?., quoted, " + " | expr(value, sources, query)]
+ end
+
+ defp update_op(command, _key, _value, _sources, query) do
+ error!(query, "Unknown update operation #{inspect(command)} for ExSqlClient.Ecto")
+ end
+
+ # ---------------------------------------------------------------------------
+ # JOIN
+ # ---------------------------------------------------------------------------
+
+ defp join(%{joins: []}, _sources), do: []
+
+ defp join(%{joins: joins} = query, sources) do
+ [
+ ?\s,
+ Enum.map_intersperse(joins, ?\s, fn
+ %JoinExpr{on: %QueryExpr{expr: expr}, qual: qual, ix: ix, source: source, hints: hints} ->
+ {join, name} = get_source(query, sources, ix, source)
+ qual_text = join_qual(qual, query)
+ join = join || ["(", expr(source, sources, query) | ")"]
+ [qual_text, join, " AS ", name, hints(hints) | join_on(qual, expr, sources, query)]
+ end)
+ ]
+ end
+
+ defp join_on(:cross, true, _sources, _query), do: []
+ defp join_on(:inner_lateral, true, _sources, _query), do: []
+ defp join_on(:left_lateral, true, _sources, _query), do: []
+ defp join_on(_qual, true, _sources, _query), do: [" ON 1 = 1"]
+ defp join_on(_qual, expr, sources, query), do: [" ON " | expr(expr, sources, query)]
+
+ defp join_qual(:inner, _), do: "INNER JOIN "
+ defp join_qual(:left, _), do: "LEFT OUTER JOIN "
+ defp join_qual(:right, _), do: "RIGHT OUTER JOIN "
+ defp join_qual(:full, _), do: "FULL OUTER JOIN "
+ defp join_qual(:cross, _), do: "CROSS JOIN "
+ defp join_qual(:inner_lateral, _), do: "CROSS APPLY "
+ defp join_qual(:left_lateral, _), do: "OUTER APPLY "
+
+ defp join_qual(qual, query),
+ do: error!(query, "join qualifier #{inspect(qual)} is not supported in ExSqlClient.Ecto")
+
+ # ---------------------------------------------------------------------------
+ # WHERE / HAVING
+ # ---------------------------------------------------------------------------
+
+ defp where(%EctoQuery{wheres: wheres} = query, sources) do
+ boolean(" WHERE ", wheres, sources, query)
+ end
+
+ defp having(%EctoQuery{havings: havings} = query, sources) do
+ boolean(" HAVING ", havings, sources, query)
+ end
+
+ defp group_by(%{group_bys: []}, _sources), do: []
+
+ defp group_by(%{group_bys: group_bys} = query, sources) do
+ [
+ " GROUP BY "
+ | Enum.map_intersperse(group_bys, ", ", fn %ByExpr{expr: expr} ->
+ Enum.map_intersperse(expr, ", ", &top_level_expr(&1, sources, query))
+ end)
+ ]
+ end
+
+ defp order_by(%{order_bys: []}, _sources), do: []
+
+ defp order_by(%{order_bys: order_bys} = query, sources) do
+ [
+ " ORDER BY "
+ | Enum.map_intersperse(order_bys, ", ", fn %ByExpr{expr: expr} ->
+ Enum.map_intersperse(expr, ", ", &order_by_expr(&1, sources, query))
+ end)
+ ]
+ end
+
+ defp order_by_expr({dir, expr}, sources, query) do
+ str = top_level_expr(expr, sources, query)
+
+ case dir do
+ :asc -> str
+ :desc -> [str | " DESC"]
+ _ -> error!(query, "#{dir} is not supported in ORDER BY in MSSQL")
+ end
+ end
+
+ # ---------------------------------------------------------------------------
+ # LIMIT / OFFSET (MSSQL: TOP(n) in SELECT; OFFSET…FETCH for pagination)
+ # ---------------------------------------------------------------------------
+
+ defp limit(%EctoQuery{limit: nil}, _sources), do: []
+
+ defp limit(%EctoQuery{limit: %{with_ties: true}} = query, _sources) do
+ error!(query, "ExSqlClient.Ecto does not support the :with_ties limit option")
+ end
+
+ defp limit(%EctoQuery{limit: %{expr: expr}} = query, sources) do
+ case Map.get(query, :offset) do
+ nil -> ["TOP(", expr(expr, sources, query), ") "]
+ _ -> []
+ end
+ end
+
+ defp offset(%{offset: nil}, _sources), do: []
+
+ defp offset(%EctoQuery{offset: _, limit: nil} = query, _sources) do
+ error!(query, "You must provide a limit while using an offset")
+ end
+
+ defp offset(%{offset: offset, limit: limit} = query, sources) do
+ [
+ " OFFSET ",
+ expr(offset.expr, sources, query),
+ " ROW",
+ " FETCH NEXT ",
+ expr(limit.expr, sources, query),
+ " ROWS ONLY"
+ ]
+ end
+
+ # ---------------------------------------------------------------------------
+ # Hints / Lock / Combinations
+ # ---------------------------------------------------------------------------
+
+ defp hints([_ | _] = hints), do: [" WITH (", Enum.intersperse(hints, ", "), ?)]
+ defp hints([]), do: []
+
+ defp lock(%{lock: nil}, _sources), do: []
+ defp lock(%{lock: binary}, _sources) when is_binary(binary), do: [" OPTION (", binary, ?)]
+ defp lock(%{lock: expr} = query, sources), do: [" OPTION (", expr(expr, sources, query), ?)]
+
+ defp combinations(%{combinations: combinations}, as_prefix) do
+ Enum.map(combinations, fn
+ {:union, query} -> [" UNION (", all(query, as_prefix), ")"]
+ {:union_all, query} -> [" UNION ALL (", all(query, as_prefix), ")"]
+ {:except, query} -> [" EXCEPT (", all(query, as_prefix), ")"]
+ {:except_all, query} -> [" EXCEPT ALL (", all(query, as_prefix), ")"]
+ {:intersect, query} -> [" INTERSECT (", all(query, as_prefix), ")"]
+ {:intersect_all, query} -> [" INTERSECT ALL (", all(query, as_prefix), ")"]
+ end)
+ end
+
+ # ---------------------------------------------------------------------------
+ # Boolean expr builder
+ # ---------------------------------------------------------------------------
+
+ defp boolean(_name, [], _sources, _query), do: []
+
+ defp boolean(name, [%{expr: expr, op: op} | query_exprs], sources, query) do
+ [
+ name
+ | Enum.reduce(query_exprs, {op, paren_expr(expr, sources, query)}, fn
+ %BooleanExpr{expr: expr, op: op}, {op, acc} ->
+ {op, [acc, operator_to_boolean(op), paren_expr(expr, sources, query)]}
+
+ %BooleanExpr{expr: expr, op: op}, {_, acc} ->
+ {op, [?(, acc, ?), operator_to_boolean(op), paren_expr(expr, sources, query)]}
+ end)
+ |> elem(1)
+ ]
+ end
+
+ defp operator_to_boolean(:and), do: " AND "
+ defp operator_to_boolean(:or), do: " OR "
+
+ defp parens_for_select([first_expr | _] = expr) do
+ if is_binary(first_expr) and String.match?(first_expr, ~r/^\s*select\s/i) do
+ [?(, expr, ?)]
+ else
+ expr
+ end
+ end
+
+ defp paren_expr(true, _sources, _query), do: ["(1 = 1)"]
+ defp paren_expr(false, _sources, _query), do: ["(1 = 0)"]
+ defp paren_expr(expr, sources, query), do: [?(, expr(expr, sources, query), ?)]
+
+ defp top_level_expr(%Ecto.SubQuery{query: query}, sources, parent_query) do
+ combinations =
+ Enum.map(query.combinations, fn {type, combo_query} ->
+ {type, put_in(combo_query.aliases[@parent_as], {parent_query, sources})}
+ end)
+
+ query = put_in(query.combinations, combinations)
+ query = put_in(query.aliases[@parent_as], {parent_query, sources})
+ [all(query, subquery_as_prefix(sources))]
+ end
+
+ defp top_level_expr(other, sources, parent_query), do: expr(other, sources, parent_query)
+
+ # ---------------------------------------------------------------------------
+ # Expression compiler
+ # ---------------------------------------------------------------------------
+
+ # Parameter reference: {:^, [], [idx]} → @1, @2, …
+ defp expr({:^, [], [idx]}, _sources, _query) do
+ "@#{idx + 1}"
+ end
+
+ defp expr({{:., _, [{:parent_as, _, [as]}, field]}, _, []}, _sources, query)
+ when is_atom(field) or is_binary(field) do
+ {ix, sources} = get_parent_sources_ix(query, as)
+ {_, name, _} = elem(sources, ix)
+ [name, ?. | quote_name(field)]
+ end
+
+ defp expr({{:., _, [{:&, _, [idx]}, field]}, _, []}, sources, _query)
+ when is_atom(field) or is_binary(field) do
+ {_, name, _} = elem(sources, idx)
+ [name, ?. | quote_name(field)]
+ end
+
+ defp expr({:&, _, [idx]}, sources, _query) do
+ {_table, source, _schema} = elem(sources, idx)
+ source
+ end
+
+ defp expr({:&, _, [idx, fields, _counter]}, sources, query) do
+ {_table, name, schema} = elem(sources, idx)
+
+ if is_nil(schema) and is_nil(fields) do
+ error!(
+ query,
+ "ExSqlClient.Ecto requires a schema module when using selector #{inspect(name)} but " <>
+ "none was given. Please specify a schema or specify exactly which fields you want."
+ )
+ end
+
+ Enum.map_join(fields, ", ", &"#{name}.#{quote_name(&1)}")
+ end
+
+ defp expr({:in, _, [_left, []]}, _sources, _query), do: "0=1"
+
+ defp expr({:in, _, [left, right]}, sources, query) when is_list(right) do
+ args = Enum.map_join(right, ",", &expr(&1, sources, query))
+ [expr(left, sources, query), " IN (", args | ")"]
+ end
+
+ defp expr({:in, _, [_, {:^, _, [_, 0]}]}, _sources, _query), do: "0=1"
+
+ defp expr({:in, _, [left, {:^, _, [idx, length]}]}, sources, query) do
+ args = list_param_to_args(idx, length)
+ [expr(left, sources, query), " IN (", args | ")"]
+ end
+
+ defp expr({:in, _, [left, %Ecto.SubQuery{} = subquery]}, sources, query) do
+ [expr(left, sources, query), " IN ", expr(subquery, sources, query)]
+ end
+
+ defp expr({:in, _, [left, right]}, sources, query) do
+ [expr(left, sources, query), " = ANY(", expr(right, sources, query) | ")"]
+ end
+
+ defp expr({:is_nil, _, [arg]}, sources, query) do
+ "#{expr(arg, sources, query)} IS NULL"
+ end
+
+ defp expr({:not, _, [expr]}, sources, query) do
+ ["NOT (", expr(expr, sources, query) | ")"]
+ end
+
+ defp expr({:filter, _, _}, _sources, query) do
+ error!(query, "ExSqlClient.Ecto does not support aggregate filters")
+ end
+
+ defp expr(%Ecto.SubQuery{} = subquery, sources, parent_query) do
+ [?(, top_level_expr(subquery, sources, parent_query), ?)]
+ end
+
+ defp expr({:fragment, _, [kw]}, _sources, query) when is_list(kw) or tuple_size(kw) == 3 do
+ error!(query, "ExSqlClient.Ecto does not support keyword or interpolated fragments")
+ end
+
+ defp expr({:fragment, _, parts}, sources, query) do
+ Enum.map(parts, fn
+ {:raw, part} -> part
+ {:expr, expr} -> expr(expr, sources, query)
+ end)
+ |> parens_for_select()
+ end
+
+ defp expr({:values, _, [types, idx, num_rows]}, _, _query) do
+ [?(, values_list(types, idx + 1, num_rows), ?)]
+ end
+
+ defp expr({:identifier, _, [literal]}, _sources, _query) do
+ quote_name(literal)
+ end
+
+ defp expr({:constant, _, [literal]}, _sources, _query) when is_binary(literal) do
+ [?', escape_string(literal), ?']
+ end
+
+ defp expr({:constant, _, [literal]}, _sources, _query) when is_number(literal) do
+ [to_string(literal)]
+ end
+
+ defp expr({:splice, _, [{:^, _, [idx, length]}]}, _sources, _query) do
+ list_param_to_args(idx, length)
+ end
+
+ defp expr({:selected_as, _, [name]}, _sources, _query) do
+ [quote_name(name)]
+ end
+
+ defp expr({:datetime_add, _, [datetime, count, interval]}, sources, query) do
+ [
+ "DATEADD(",
+ interval,
+ ", ",
+ interval_count(count, sources, query),
+ ", CAST(",
+ expr(datetime, sources, query),
+ " AS datetime2(6)))"
+ ]
+ end
+
+ defp expr({:date_add, _, [date, count, interval]}, sources, query) do
+ [
+ "CAST(DATEADD(",
+ interval,
+ ", ",
+ interval_count(count, sources, query),
+ ", CAST(",
+ expr(date, sources, query),
+ " AS datetime2(6))" | ") AS date)"
+ ]
+ end
+
+ defp expr({:count, _, []}, _sources, _query), do: "count(*)"
+
+ defp expr({:json_extract_path, _, _}, _sources, query) do
+ error!(
+ query,
+ "ExSqlClient.Ecto does not support json_extract_path, use fragment with JSON_VALUE/JSON_QUERY"
+ )
+ end
+
+ defp expr({fun, _, args}, sources, query) when is_atom(fun) and is_list(args) do
+ {modifier, args} =
+ case args do
+ [rest, :distinct] -> {"DISTINCT ", [rest]}
+ _ -> {"", args}
+ end
+
+ case handle_call(fun, length(args)) do
+ {:binary_op, op} ->
+ [left, right] = args
+ [op_to_binary(left, sources, query), op | op_to_binary(right, sources, query)]
+
+ {:fun, fun} ->
+ [
+ fun,
+ ?(,
+ modifier,
+ Enum.map_intersperse(args, ", ", &top_level_expr(&1, sources, query)),
+ ?)
+ ]
+ end
+ end
+
+ defp expr(list, sources, query) when is_list(list) do
+ Enum.map_join(list, ", ", &expr(&1, sources, query))
+ end
+
+ defp expr(string, _sources, _query) when is_binary(string) do
+ "N'#{escape_string(string)}'"
+ end
+
+ defp expr(%Decimal{exp: exp} = decimal, _sources, _query) do
+ [
+ "CAST(",
+ Decimal.to_string(decimal, :normal),
+ " as decimal(38, #{abs(exp)})",
+ ?)
+ ]
+ end
+
+ defp expr(%Tagged{value: binary, type: :binary}, _sources, _query) when is_binary(binary) do
+ hex = Base.encode16(binary, case: :lower)
+ "0x#{hex}"
+ end
+
+ defp expr(%Tagged{value: binary, type: :uuid}, _sources, _query) when is_binary(binary) do
+ binary
+ end
+
+ defp expr(%Tagged{value: other, type: :integer}, sources, query) do
+ "CAST(#{expr(other, sources, query)} AS bigint)"
+ end
+
+ defp expr(%Tagged{value: other, type: type}, sources, query) do
+ "CAST(#{expr(other, sources, query)} AS #{column_type(type, [])})"
+ end
+
+ defp expr(nil, _sources, _query), do: "NULL"
+ defp expr(true, _sources, _query), do: "1"
+ defp expr(false, _sources, _query), do: "0"
+
+ defp expr(literal, _sources, _query) when is_integer(literal) do
+ Integer.to_string(literal)
+ end
+
+ defp expr(literal, _sources, _query) when is_float(literal) do
+ Float.to_string(literal)
+ end
+
+ defp expr(field, _sources, query) do
+ error!(query, "unsupported MSSQL expression: `#{inspect(field)}`")
+ end
+
+ defp values_list(types, idx, num_rows) do
+ rows = :lists.seq(1, num_rows, 1)
+
+ [
+ "VALUES ",
+ intersperse_reduce(rows, ?,, idx, fn _, idx ->
+ {value, idx} = values_expr(types, idx)
+ {[?(, value, ?)], idx}
+ end)
+ |> elem(0)
+ ]
+ end
+
+ defp values_expr(types, idx) do
+ intersperse_reduce(types, ?,, idx, fn {_field, type}, idx ->
+ {["CAST(", ?@, Integer.to_string(idx), " AS ", column_type(type, []), ?)], idx + 1}
+ end)
+ end
+
+ defp op_to_binary({op, _, [_, _]} = expr, sources, query) when op in @binary_ops do
+ paren_expr(expr, sources, query)
+ end
+
+ defp op_to_binary({:is_nil, _, [_]} = expr, sources, query) do
+ paren_expr(expr, sources, query)
+ end
+
+ defp op_to_binary(expr, sources, query), do: expr(expr, sources, query)
+
+ defp interval_count(count, _sources, _query) when is_integer(count) do
+ Integer.to_string(count)
+ end
+
+ defp interval_count(count, _sources, _query) when is_float(count) do
+ :erlang.float_to_binary(count, [:compact, decimals: 16])
+ end
+
+ defp interval_count(count, sources, query), do: expr(count, sources, query)
+
+ # ---------------------------------------------------------------------------
+ # OUTPUT clause (MSSQL RETURNING equivalent)
+ # ---------------------------------------------------------------------------
+
+ defp returning([], _verb), do: []
+
+ defp returning(returning, verb) when is_list(returning) do
+ [" OUTPUT ", Enum.map_intersperse(returning, ", ", &[verb, ?., quote_name(&1)])]
+ end
+
+ defp returning(%{select: nil}, _, _), do: []
+
+ defp returning(%{select: %{fields: fields}} = query, idx, verb) do
+ [
+ " OUTPUT "
+ | Enum.map_intersperse(fields, ", ", fn
+ {{:., _, [{:&, _, [^idx]}, key]}, _, _} -> [verb, ?., quote_name(key)]
+ _ -> error!(query, "MSSQL can only return table #{verb} columns")
+ end)
+ ]
+ end
+
+ # ---------------------------------------------------------------------------
+ # Source name generation
+ # ---------------------------------------------------------------------------
+
+ defp create_names(%{sources: sources}, as_prefix) do
+ create_names(sources, 0, tuple_size(sources), as_prefix) |> List.to_tuple()
+ end
+
+ defp create_names(sources, pos, limit, as_prefix) when pos < limit do
+ [create_name(sources, pos, as_prefix) | create_names(sources, pos + 1, limit, as_prefix)]
+ end
+
+ defp create_names(_sources, pos, pos, as_prefix), do: [as_prefix]
+
+ defp subquery_as_prefix(sources) do
+ [?s | :erlang.element(tuple_size(sources), sources)]
+ end
+
+ defp create_name(sources, pos, as_prefix) do
+ case elem(sources, pos) do
+ {:fragment, _, _} ->
+ {nil, as_prefix ++ [?f | Integer.to_string(pos)], nil}
+
+ {:values, _, _} ->
+ {nil, as_prefix ++ [?v | Integer.to_string(pos)], nil}
+
+ {table, model, prefix} ->
+ name = as_prefix ++ [create_alias(table) | Integer.to_string(pos)]
+ {quote_table(prefix, table), name, model}
+
+ %Ecto.SubQuery{} ->
+ {nil, as_prefix ++ [?s | Integer.to_string(pos)], nil}
+ end
+ end
+
+ defp create_alias(<<first, _rest::binary>>)
+ when first in ?a..?z or first in ?A..?Z,
+ do: first
+
+ defp create_alias(_), do: ?t
+
+ # ---------------------------------------------------------------------------
+ # Source helpers
+ # ---------------------------------------------------------------------------
+
+ defp get_source(query, sources, ix, source) do
+ {expr, name, _schema} = elem(sources, ix)
+ {expr || expr(source, sources, query), name}
+ end
+
+ defp get_parent_sources_ix(query, as) do
+ case query.aliases[@parent_as] do
+ {%{aliases: %{^as => ix}}, sources} -> {ix, sources}
+ {%{aliases: aliases}, _sources} -> error!(query, "unknown alias `#{as}`, aliases: #{inspect(aliases)}")
+ end
+ end
+
+ # ---------------------------------------------------------------------------
+ # Quoting / identifier helpers
+ # ---------------------------------------------------------------------------
+
+ defp quote_name(name) when is_atom(name), do: quote_name(Atom.to_string(name))
+ defp quote_name(name) when is_binary(name), do: [?[, name, ?]]
+
+ defp quote_names(names), do: Enum.map_intersperse(names, ?,, &quote_name/1)
+
+ defp quote_table(nil, name), do: quote_name(name)
+ defp quote_table(prefix, name), do: [quote_name(prefix), ?., quote_name(name)]
+
+ defp escape_string(value) when is_binary(value) do
+ String.replace(value, "'", "''")
+ end
+
+ defp list_param_to_args(idx, length) do
+ Enum.map_join(idx..(idx + length - 1), ", ", fn i -> "@#{i + 1}" end)
+ end
+
+ defp column_type(:string, _opts), do: "nvarchar(max)"
+ defp column_type(:binary, _opts), do: "varbinary(max)"
+ defp column_type(:boolean, _opts), do: "bit"
+ defp column_type(:integer, _opts), do: "bigint"
+ defp column_type(:float, _opts), do: "float"
+ defp column_type(:decimal, _opts), do: "decimal"
+ defp column_type(:date, _opts), do: "date"
+ defp column_type(:time, _opts), do: "time"
+ defp column_type(:naive_datetime, _opts), do: "datetime2"
+ defp column_type(:utc_datetime, _opts), do: "datetimeoffset"
+ defp column_type(:uuid, _opts), do: "uniqueidentifier"
+ defp column_type({:array, _inner}, _opts), do: raise("MSSQL does not support array types")
+ defp column_type(type, _opts), do: Atom.to_string(type)
+
+ # ---------------------------------------------------------------------------
+ # intersperse_reduce helper
+ # ---------------------------------------------------------------------------
+
+ defp intersperse_reduce(list, separator, user_acc, reducer, acc \\ [])
+
+ defp intersperse_reduce([], _separator, user_acc, _reducer, acc) do
+ {Enum.reverse(acc), user_acc}
+ end
+
+ defp intersperse_reduce([elem], _separator, user_acc, reducer, acc) do
+ {item, user_acc} = reducer.(elem, user_acc)
+ {Enum.reverse([item | acc]), user_acc}
+ end
+
+ defp intersperse_reduce([elem | rest], separator, user_acc, reducer, acc) do
+ {item, user_acc} = reducer.(elem, user_acc)
+ intersperse_reduce(rest, separator, user_acc, reducer, [separator, item | acc])
+ end
+
+ # ---------------------------------------------------------------------------
+ # error!/2 helper
+ # ---------------------------------------------------------------------------
+
+ defp error!(nil, message) do
+ raise ArgumentError, message
+ end
+
+ defp error!(query, message) do
+ raise Ecto.QueryError, query: query, message: message
+ end
+end
+end
diff --git a/lib/ex_sql_client/protocol.ex b/lib/ex_sql_client/protocol.ex
index ae1734d..9b7c28a 100644
--- a/lib/ex_sql_client/protocol.ex
+++ b/lib/ex_sql_client/protocol.ex
@@ -88,7 +88,7 @@ defmodule ExSqlClient.Protocol do
end
@impl true
- def handle_execute(query, params, _opts, state = %{status: :transaction}) do
+ def handle_execute(query, params, _opts, %{status: :transaction} = state) do
case Client.invoke(state.client, "ExecuteInTransaction", [
query.statement,
params,
@@ -121,7 +121,7 @@ defmodule ExSqlClient.Protocol do
end
@impl true
- def handle_begin(_opts, state = %{status: :idle}) do
+ def handle_begin(_opts, %{status: :idle} = state) do
case Client.invoke(state.client, "BeginTransaction", []) do
{:ok, transaction_id} ->
{:ok, :began, %{state | status: :transaction, transaction_id: transaction_id}}
@@ -132,7 +132,7 @@ defmodule ExSqlClient.Protocol do
end
@impl true
- def handle_rollback(_opts, state = %{status: :transaction}) do
+ def handle_rollback(_opts, %{status: :transaction} = state) do
case Client.invoke(state.client, "RollbackTransaction", [state.transaction_id]) do
{:ok, true} ->
{:ok, :rolledback, %{state | status: :idle, transaction_id: nil}}
@@ -143,7 +143,7 @@ defmodule ExSqlClient.Protocol do
end
@impl true
- def handle_commit(_opts, state = %{status: :transaction}) do
+ def handle_commit(_opts, %{status: :transaction} = state) do
case Client.invoke(state.client, "CommitTransaction", [state.transaction_id]) do
{:ok, true} ->
{:ok, :committed, %{state | status: :idle, transaction_id: nil}}
diff --git a/lib/ex_sql_client/query.ex b/lib/ex_sql_client/query.ex
index 9c5844e..12bdd86 100644
--- a/lib/ex_sql_client/query.ex
+++ b/lib/ex_sql_client/query.ex
@@ -9,4 +9,8 @@ defmodule ExSqlClient.Query do
def encode(_query, params, _), do: params
def decode(_, result, _opts), do: result
end
+
+ defimpl String.Chars, for: ExSqlClient.Query do
+ def to_string(%{statement: statement}), do: statement || ""
+ end
end
diff --git a/mix.exs b/mix.exs
index 9d4b1eb..0f6efce 100644
--- a/mix.exs
+++ b/mix.exs
@@ -32,6 +32,8 @@ defmodule ExSqlClient.MixProject do
[
{:netler, "~> 0.4"},
{:db_connection, "~> 2.5"},
+ {:ecto, "~> 3.10", optional: true},
+ {:ecto_sql, "~> 3.10", optional: true},
{:ex_doc, "~> 0.34", only: :dev, runtime: false},
{:credo, "~> 1.7", only: [:dev, :test], runtime: false},
{:testcontainers, "~> 1.0", only: [:dev, :test]},
diff --git a/mix.lock b/mix.lock
index be0439e..e012112 100644
--- a/mix.lock
+++ b/mix.lock
@@ -5,9 +5,12 @@
"connection": {:hex, :connection, "1.0.4", "a1cae72211f0eef17705aaededacac3eb30e6625b04a6117c1b2db6ace7d5976", [:mix], [], "hexpm", "4a0850c9be22a43af9920a71ab17c051f5f7d45c209e40269a1938832510e4d9"},
"credo": {:hex, :credo, "1.7.16", "a9f1389d13d19c631cb123c77a813dbf16449a2aebf602f590defa08953309d4", [:mix], [{:bunt, "~> 0.2.1 or ~> 1.0", [hex: :bunt, repo: "hexpm", optional: false]}, {:file_system, "~> 0.2 or ~> 1.0", [hex: :file_system, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: false]}], "hexpm", "d0562af33756b21f248f066a9119e3890722031b6d199f22e3cf95550e4f1579"},
"db_connection": {:hex, :db_connection, "2.9.0", "a6a97c5c958a2d7091a58a9be40caf41ab496b0701d21e1d1abff3fa27a7f371", [:mix], [{:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "17d502eacaf61829db98facf6f20808ed33da6ccf495354a41e64fe42f9c509c"},
+ "decimal": {:hex, :decimal, "2.3.0", "3ad6255aa77b4a3c4f818171b12d237500e63525c2fd056699967a3e7ea20f62", [:mix], [], "hexpm", "a4d66355cb29cb47c3cf30e71329e58361cfcb37c34235ef3bf1d7bf3773aeac"},
"deep_merge": {:hex, :deep_merge, "1.0.0", "b4aa1a0d1acac393bdf38b2291af38cb1d4a52806cf7a4906f718e1feb5ee961", [:mix], [], "hexpm", "ce708e5f094b9cd4e8f2be4f00d2f4250c4095be93f8cd6d018c753894885430"},
"earmark": {:hex, :earmark, "1.4.3", "364ca2e9710f6bff494117dbbd53880d84bebb692dafc3a78eb50aa3183f2bfd", [:mix], [], "hexpm", "8cf8a291ebf1c7b9539e3cddb19e9cef066c2441b1640f13c34c1d3cfc825fec"},
"earmark_parser": {:hex, :earmark_parser, "1.4.44", "f20830dd6b5c77afe2b063777ddbbff09f9759396500cdbe7523efd58d7a339c", [:mix], [], "hexpm", "4778ac752b4701a5599215f7030989c989ffdc4f6df457c5f36938cc2d2a2750"},
+ "ecto": {:hex, :ecto, "3.13.5", "9d4a69700183f33bf97208294768e561f5c7f1ecf417e0fa1006e4a91713a834", [:mix], [{:decimal, "~> 2.0", [hex: :decimal, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "df9efebf70cf94142739ba357499661ef5dbb559ef902b68ea1f3c1fabce36de"},
+ "ecto_sql": {:hex, :ecto_sql, "3.13.4", "b6e9d07557ddba62508a9ce4a484989a5bb5e9a048ae0e695f6d93f095c25d60", [:mix], [{:db_connection, "~> 2.4.1 or ~> 2.5", [hex: :db_connection, repo: "hexpm", optional: false]}, {:ecto, "~> 3.13.0", [hex: :ecto, repo: "hexpm", optional: false]}, {:myxql, "~> 0.7", [hex: :myxql, repo: "hexpm", optional: true]}, {:postgrex, "~> 0.19 or ~> 1.0", [hex: :postgrex, repo: "hexpm", optional: true]}, {:tds, "~> 2.1.1 or ~> 2.2", [hex: :tds, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4.0 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "2b38cf0749ca4d1c5a8bcbff79bbe15446861ca12a61f9fba604486cb6b62a14"},
"ex_doc": {:hex, :ex_doc, "0.40.1", "67542e4b6dde74811cfd580e2c0149b78010fd13001fda7cfeb2b2c2ffb1344d", [:mix], [{:earmark_parser, "~> 1.4.44", [hex: :earmark_parser, repo: "hexpm", optional: false]}, {:makeup_c, ">= 0.1.0", [hex: :makeup_c, repo: "hexpm", optional: true]}, {:makeup_elixir, "~> 0.14 or ~> 1.0", [hex: :makeup_elixir, repo: "hexpm", optional: false]}, {:makeup_erlang, "~> 0.1 or ~> 1.0", [hex: :makeup_erlang, repo: "hexpm", optional: false]}, {:makeup_html, ">= 0.1.0", [hex: :makeup_html, repo: "hexpm", optional: true]}], "hexpm", "bcef0e2d360d93ac19f01a85d58f91752d930c0a30e2681145feea6bd3516e00"},
"file_system": {:hex, :file_system, "1.1.1", "31864f4685b0148f25bd3fbef2b1228457c0c89024ad67f7a81a3ffbc0bbad3a", [:mix], [], "hexpm", "7a15ff97dfe526aeefb090a7a9d3d03aa907e100e262a0f8f7746b78f8f87a5d"},
"fs": {:hex, :fs, "11.4.1", "11fb3153bb2e1de851b8263bb5698d526894853c73a525ebeb5e69108b2d25cd", [:rebar3], [], "hexpm", "dd00a61d89eac01d16d3fc51d5b0eb5f0722ef8e3c1a3a547cd086957f3260a9"},
diff --git a/test/ecto/ecto_adapter_test.exs b/test/ecto/ecto_adapter_test.exs
new file mode 100644
index 0000000..78ec908
--- /dev/null
+++ b/test/ecto/ecto_adapter_test.exs
@@ -0,0 +1,185 @@
+defmodule ExSqlClient.EctoAdapterTest do
+ use ExUnit.Case
+
+ # ---------------------------------------------------------------------------
+ # Repo + Schema setup
+ # ---------------------------------------------------------------------------
+
+ defmodule TestRepo do
+ use Ecto.Repo,
+ otp_app: :ex_sql_client,
+ adapter: ExSqlClient.Ecto
+ end
+
+ defmodule User do
+ use Ecto.Schema
+
+ schema "ecto_users" do
+ field(:name, :string)
+ field(:email, :string)
+ field(:age, :integer)
+ field(:active, :boolean)
+ end
+ end
+
+ # ---------------------------------------------------------------------------
+ # setup_all: start repo, create test table
+ # ---------------------------------------------------------------------------
+
+ setup_all do
+ connection_string = Application.fetch_env!(:ex_sql_client, :test_connection_string)
+
+ start_supervised!(
+ {TestRepo, [connection_string: connection_string, pool_size: 2]}
+ )
+
+ # Create test table (idempotent)
+ TestRepo.query!("""
+ IF NOT EXISTS (
+ SELECT * FROM INFORMATION_SCHEMA.TABLES
+ WHERE TABLE_NAME = 'ecto_users'
+ )
+ BEGIN
+ CREATE TABLE [dbo].[ecto_users] (
+ [id] INT IDENTITY(1,1) PRIMARY KEY,
+ [name] NVARCHAR(255),
+ [email] NVARCHAR(255) UNIQUE,
+ [age] INT,
+ [active] BIT
+ )
+ END
+ """)
+
+ :ok
+ end
+
+ # Clean table before each test
+ setup do
+ TestRepo.delete_all(User)
+ :ok
+ end
+
+ # ---------------------------------------------------------------------------
+ # Tests
+ # ---------------------------------------------------------------------------
+
+ @tag :integration
+ test "Repo.query!/2 raw SQL works" do
+ result = TestRepo.query!("SELECT 1 AS [one]")
+ assert result.num_rows == 1
+ assert result.columns == ["one"]
+ assert result.rows == [[1]]
+ end
+
+ @tag :integration
+ test "Repo.insert/2 and Repo.all/1" do
+ assert {:ok, user} =
+ TestRepo.insert(%User{name: "Alice", email: "alice@example.com", age: 30, active: true})
+
+ assert user.id != nil
+ assert user.name == "Alice"
+
+ users = TestRepo.all(User)
+ assert length(users) == 1
+ assert hd(users).name == "Alice"
+ end
+
+ @tag :integration
+ test "Repo.get/2" do
+ {:ok, user} = TestRepo.insert(%User{name: "Bob", email: "bob@example.com", age: 25, active: false})
+ found = TestRepo.get(User, user.id)
+
+ assert found != nil
+ assert found.name == "Bob"
+ end
+
+ @tag :integration
+ test "Repo.update/2" do
+ {:ok, user} = TestRepo.insert(%User{name: "Charlie", email: "charlie@example.com", age: 20, active: true})
+
+ changeset = Ecto.Changeset.change(user, name: "Charles")
+ assert {:ok, updated} = TestRepo.update(changeset)
+ assert updated.name == "Charles"
+
+ found = TestRepo.get!(User, user.id)
+ assert found.name == "Charles"
+ end
+
+ @tag :integration
+ test "Repo.delete/1" do
+ {:ok, user} = TestRepo.insert(%User{name: "Dave", email: "dave@example.com", age: 40, active: false})
+
+ assert {:ok, _} = TestRepo.delete(user)
+ assert TestRepo.get(User, user.id) == nil
+ end
+
+ @tag :integration
+ test "Repo.all/1 with where clause" do
+ TestRepo.insert!(%User{name: "Eve", email: "eve@example.com", age: 22, active: true})
+ TestRepo.insert!(%User{name: "Frank", email: "frank@example.com", age: 17, active: false})
+
+ import Ecto.Query
+ adults = TestRepo.all(from(u in User, where: u.age >= 18))
+ assert length(adults) == 1
+ assert hd(adults).name == "Eve"
+ end
+
+ @tag :integration
+ test "Repo.update_all/2" do
+ TestRepo.insert!(%User{name: "Grace", email: "grace@example.com", age: 25, active: false})
+
+ import Ecto.Query
+    assert {1, _} = TestRepo.update_all(from(u in User, where: u.name == "Grace"), set: [active: true])
+
+ found = TestRepo.one!(from(u in User, where: u.name == "Grace"))
+ assert found.active == true
+ end
+
+ @tag :integration
+ test "Repo.delete_all/1 with where" do
+ TestRepo.insert!(%User{name: "Heidi", email: "heidi@example.com", age: 30, active: false})
+ TestRepo.insert!(%User{name: "Ivan", email: "ivan@example.com", age: 35, active: true})
+
+ import Ecto.Query
+    assert {1, _} = TestRepo.delete_all(from(u in User, where: u.active == false))
+
+ remaining = TestRepo.all(User)
+ assert length(remaining) == 1
+ assert hd(remaining).name == "Ivan"
+ end
+
+ @tag :integration
+ test "Repo.transaction/1 commit" do
+ result =
+ TestRepo.transaction(fn ->
+ TestRepo.insert!(%User{name: "Judy", email: "judy@example.com", age: 28, active: true})
+ :committed
+ end)
+
+ assert result == {:ok, :committed}
+ assert TestRepo.aggregate(User, :count) == 1
+ end
+
+ @tag :integration
+ test "Repo.transaction/1 rollback" do
+ result =
+ TestRepo.transaction(fn ->
+ TestRepo.insert!(%User{name: "Karl", email: "karl@example.com", age: 33, active: true})
+ TestRepo.rollback(:oops)
+ end)
+
+ assert result == {:error, :oops}
+ assert TestRepo.aggregate(User, :count) == 0
+ end
+
+ @tag :integration
+ test "boolean BIT round-trip" do
+ TestRepo.insert!(%User{name: "Laura", email: "laura@example.com", age: 25, active: true})
+ TestRepo.insert!(%User{name: "Mallory", email: "mallory@example.com", age: 25, active: false})
+
+ import Ecto.Query
+ actives = TestRepo.all(from(u in User, where: u.active == true))
+ assert length(actives) == 1
+ assert hd(actives).active == true
+ end
+end
diff --git a/test/ecto/query_test.exs b/test/ecto/query_test.exs
new file mode 100644
index 0000000..9ed9d87
--- /dev/null
+++ b/test/ecto/query_test.exs
@@ -0,0 +1,347 @@
+defmodule ExSqlClient.Ecto.QueryTest do
+ use ExUnit.Case, async: true
+
+ import Ecto.Query
+ alias ExSqlClient.Ecto.Connection
+
+ # ---------------------------------------------------------------------------
+ # Helpers
+ # ---------------------------------------------------------------------------
+
+ defp sql(iodata), do: IO.iodata_to_binary(iodata)
+
+ # Normalise an Ecto query through the planner so query.sources is populated.
+ defp plan(query, operation \\ :all) do
+ {query, _params, _key} = Ecto.Query.Planner.plan(query, operation, ExSqlClient.Ecto)
+ {query, _} = Ecto.Query.Planner.normalize(query, operation, ExSqlClient.Ecto, 0)
+ query
+ end
+
+ # ---------------------------------------------------------------------------
+ # Schema for tests
+ # ---------------------------------------------------------------------------
+
+ defmodule User do
+ use Ecto.Schema
+
+ schema "users" do
+ field(:name, :string)
+ field(:email, :string)
+ field(:age, :integer)
+ field(:active, :boolean)
+ timestamps()
+ end
+ end
+
+ defmodule Post do
+ use Ecto.Schema
+
+ schema "posts" do
+ field(:title, :string)
+ field(:body, :string)
+ belongs_to(:user, User)
+ timestamps()
+ end
+ end
+
+ # ---------------------------------------------------------------------------
+ # all/1 — SELECT
+ # ---------------------------------------------------------------------------
+
+ describe "all/1 SELECT" do
+ test "simple select all fields" do
+ query = from(u in User) |> select([u], u) |> plan()
+ result = sql(Connection.all(query))
+
+ assert result =~ "SELECT"
+ assert result =~ "FROM [users]"
+ end
+
+ test "select specific field" do
+ query = from(u in User, select: u.name) |> plan()
+ result = sql(Connection.all(query))
+
+ assert result =~ "[name]"
+ assert result =~ "FROM [users]"
+ end
+
+ test "select with where clause" do
+ query = from(u in User, where: u.age > 18, select: u.name) |> plan()
+ result = sql(Connection.all(query))
+
+ assert result =~ "WHERE"
+ assert result =~ "[age]"
+ assert result =~ " > "
+ end
+
+ test "select with parameterised where clause" do
+ query = from(u in User, where: u.name == ^"Alice", select: u.name) |> plan()
+ result = sql(Connection.all(query))
+
+ assert result =~ "@1"
+ assert result =~ "WHERE"
+ end
+
+ test "TOP(n) for limit without offset" do
+ query = from(u in User, limit: 10, select: u.name) |> plan()
+ result = sql(Connection.all(query))
+
+ assert result =~ "TOP(10)"
+ refute result =~ "OFFSET"
+ end
+
+ test "OFFSET…FETCH for limit with offset" do
+ query = from(u in User, order_by: u.id, limit: 10, offset: 5, select: u.name) |> plan()
+ result = sql(Connection.all(query))
+
+ refute result =~ "TOP("
+ assert result =~ "OFFSET 5 ROW"
+ assert result =~ "FETCH NEXT 10 ROWS ONLY"
+ end
+
+ test "order_by asc" do
+ query = from(u in User, order_by: u.name, select: u.name) |> plan()
+ result = sql(Connection.all(query))
+
+ assert result =~ "ORDER BY"
+ assert result =~ "[name]"
+ end
+
+ test "order_by desc" do
+ query = from(u in User, order_by: [desc: u.name], select: u.name) |> plan()
+ result = sql(Connection.all(query))
+
+ assert result =~ "ORDER BY"
+ assert result =~ "[name] DESC"
+ end
+
+ test "inner join" do
+ query =
+ from(u in User,
+ join: p in Post,
+ on: p.user_id == u.id,
+ select: u.name
+ )
+ |> plan()
+
+ result = sql(Connection.all(query))
+
+ assert result =~ "INNER JOIN [posts]"
+ assert result =~ "ON"
+ end
+
+ test "left join" do
+ query =
+ from(u in User,
+ left_join: p in Post,
+ on: p.user_id == u.id,
+ select: u.name
+ )
+ |> plan()
+
+ result = sql(Connection.all(query))
+
+ assert result =~ "LEFT OUTER JOIN [posts]"
+ end
+
+ test "group_by" do
+ query = from(u in User, group_by: u.age, select: u.age) |> plan()
+ result = sql(Connection.all(query))
+
+ assert result =~ "GROUP BY"
+ assert result =~ "[age]"
+ end
+
+ test "having" do
+ query =
+ from(u in User,
+ group_by: u.age,
+ having: count(u.id) > 5,
+ select: u.age
+ )
+ |> plan()
+
+ result = sql(Connection.all(query))
+
+ assert result =~ "HAVING"
+ assert result =~ "count("
+ end
+
+ test "DISTINCT" do
+ query = from(u in User, distinct: true, select: u.name) |> plan()
+ result = sql(Connection.all(query))
+
+ assert result =~ "DISTINCT"
+ end
+
+ test "identifier quoting uses brackets" do
+ query = from(u in User, select: u.name) |> plan()
+ result = sql(Connection.all(query))
+
+ assert result =~ "[users]"
+ assert result =~ "[name]"
+ end
+
+ test "error when offset without order_by" do
+ query = from(u in User, limit: 10, offset: 5, select: u.name) |> plan()
+
+ assert_raise Ecto.QueryError, ~r/ORDER BY is mandatory/, fn ->
+ Connection.all(query)
+ end
+ end
+ end
+
+ # ---------------------------------------------------------------------------
+ # insert/7
+ # ---------------------------------------------------------------------------
+
+ describe "insert/7" do
+ test "basic insert with OUTPUT INSERTED" do
+ result =
+ sql(
+          Connection.insert(
+            "dbo", "users", [:name, :email], [[:name, :email]], {:raise, [], []}, [:id], []
+          )
+ )
+
+ assert result =~ "INSERT INTO [dbo].[users]"
+ assert result =~ "([name],[email])"
+ assert result =~ "OUTPUT INSERTED.[id]"
+ assert result =~ "VALUES"
+ assert result =~ "@1"
+ assert result =~ "@2"
+ end
+
+ test "insert without returning" do
+ result =
+ sql(
+ Connection.insert(nil, "users", [:name], [[:name]], {:raise, [], []}, [], [])
+ )
+
+ assert result =~ "INSERT INTO [users]"
+ refute result =~ "OUTPUT"
+ assert result =~ "@1"
+ end
+
+ test "insert DEFAULT VALUES when no header" do
+ result = sql(Connection.insert(nil, "users", [], [], {:raise, [], []}, [:id], []))
+
+ assert result =~ "DEFAULT VALUES"
+ assert result =~ "OUTPUT INSERTED.[id]"
+ end
+ end
+
+ # ---------------------------------------------------------------------------
+ # update/5
+ # ---------------------------------------------------------------------------
+
+ describe "update/5" do
+ test "basic update" do
+ result = sql(Connection.update(nil, "users", [:name, :email], [:id], []))
+
+ assert result =~ "UPDATE [users]"
+ assert result =~ "SET"
+ assert result =~ "[name] = @1"
+ assert result =~ "[email] = @2"
+ assert result =~ "WHERE [id] = @3"
+ end
+
+ test "update with OUTPUT INSERTED" do
+ result = sql(Connection.update(nil, "users", [:name], [:id], [:name]))
+
+ assert result =~ "OUTPUT INSERTED.[name]"
+ end
+
+ test "update with nil filter (IS NULL)" do
+ result = sql(Connection.update(nil, "users", [:name], [{:id, nil}], []))
+
+ assert result =~ "[id] IS NULL"
+ end
+ end
+
+ # ---------------------------------------------------------------------------
+ # delete/4
+ # ---------------------------------------------------------------------------
+
+ describe "delete/4" do
+ test "basic delete" do
+ result = sql(Connection.delete(nil, "users", [:id], []))
+
+ assert result =~ "DELETE FROM [users]"
+ assert result =~ "WHERE [id] = @1"
+ end
+
+ test "delete with OUTPUT DELETED" do
+ result = sql(Connection.delete(nil, "users", [:id], [:id, :name]))
+
+ assert result =~ "OUTPUT DELETED.[id]"
+ assert result =~ "DELETED.[name]"
+ end
+
+ test "delete with nil filter (IS NULL)" do
+ result = sql(Connection.delete(nil, "users", [{:id, nil}], []))
+
+ assert result =~ "[id] IS NULL"
+ end
+ end
+
+ # ---------------------------------------------------------------------------
+ # update_all/1
+ # ---------------------------------------------------------------------------
+
+ describe "update_all/1" do
+ test "basic update_all" do
+ query = from(u in User, update: [set: [name: "Bob"]]) |> plan(:update_all)
+ result = sql(Connection.update_all(query))
+
+ assert result =~ "UPDATE"
+ assert result =~ "SET"
+ assert result =~ "[name]"
+ assert result =~ "FROM [users]"
+ end
+
+ test "update_all with where clause" do
+ query =
+ from(u in User, where: u.active == true, update: [set: [name: "Bob"]])
+ |> plan(:update_all)
+
+ result = sql(Connection.update_all(query))
+
+ assert result =~ "WHERE"
+ end
+ end
+
+ # ---------------------------------------------------------------------------
+ # delete_all/1
+ # ---------------------------------------------------------------------------
+
+ describe "delete_all/1" do
+ test "basic delete_all" do
+ query = from(u in User) |> plan(:delete_all)
+ result = sql(Connection.delete_all(query))
+
+ assert result =~ "DELETE"
+ assert result =~ "FROM [users]"
+ end
+
+ test "delete_all with where clause" do
+ query = from(u in User, where: u.age < 18) |> plan(:delete_all)
+ result = sql(Connection.delete_all(query))
+
+ assert result =~ "WHERE"
+ assert result =~ "[age]"
+ end
+ end
+
+ # ---------------------------------------------------------------------------
+ # table_exists_query/1
+ # ---------------------------------------------------------------------------
+
+ test "table_exists_query/1" do
+    assert {sql_str, params} = Connection.table_exists_query("users")
+
+ assert sql_str =~ "sys.tables"
+ assert sql_str =~ "@1"
+ assert params == ["users"]
+ end
+end