Conversation
…sure there is only one SQL statement per init function.
Update getting started documentation
config/dev.exs
Outdated
schema: "event_store",
column_data_type: "jsonb",
partitioned_events: true,
use_pg_partman: true
According to the docs, EventStore supports Postgres 9.5 and up.
I spun up Postgres 9.5 in a container with:
podman run --name some-postgres -e POSTGRES_PASSWORD=pass -e POSTGRES_USER=admin -d postgres:9.5.21
podman exec -it some-postgres bash
psql -h localhost -U admin
Then I ran select * from pg_available_extensions; and pg_partman was not in the list.
I think use_pg_partman and partitioned_events should therefore be set to false in every configuration as it may not be available for every version that event_store supports.
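The same availability check can be done from Elixir via Postgrex (the Postgres driver EventStore already depends on). This is a minimal sketch; the connection parameters match the container above and are otherwise hypothetical:

```elixir
# Sketch: check whether pg_partman can be enabled on the connected server.
{:ok, conn} =
  Postgrex.start_link(
    hostname: "localhost",
    username: "admin",
    password: "pass",
    database: "postgres"
  )

%Postgrex.Result{rows: rows} =
  Postgrex.query!(
    conn,
    "SELECT name FROM pg_available_extensions WHERE name = 'pg_partman'",
    []
  )

# An empty result means the extension is not installed on the server,
# so use_pg_partman must stay false there.
partman_available? = rows != []
```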
This obviously impacts the test suite's actual scope (if you disable the option, you wouldn't be testing how partitioning affects the validity of the code). I think we should discuss a testing strategy that aligns with the official library's own support goals.
- Proposition 1: Rerun the whole test suite with the option on and off
- Proposition 2: Create new tests for the enabled option
I'm personally more inclined to go with the first (because it means you can verifiably say the configuration option does not impact the original implementation in any way).
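One way to make proposition 1 practical is to drive the flag from an environment variable in the test config, so CI can run the identical suite once per combination. This is a hypothetical sketch (the PARTITIONED_EVENTS and USE_PG_PARTMAN variable names are not part of the library):

```elixir
# config/test.exs — hypothetical: read the flags from env vars so CI can run
# e.g. `PARTITIONED_EVENTS=true mix test` and `PARTITIONED_EVENTS=false mix test`
config :eventstore, EventStore,
  partitioned_events: System.get_env("PARTITIONED_EVENTS") == "true",
  use_pg_partman: System.get_env("USE_PG_PARTMAN") == "true"
```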
If the idea is to use this fork in Ublo directly before contributing the changes upstream, I think we can pick this conversation back up at a later time.
I agree with you partitioned_events and use_pg_partman should be set to false. I have made the change.
> This obviously impacts the test suite's actual scope (if you disable the option, you wouldn't be testing how partitioning affects the validity of the code). I think we should discuss a testing strategy that aligns with the official library's own support goals.
> - Proposition 1: Rerun the whole test suite with the option on and off
> - Proposition 2: Create new tests for the enabled option
> I'm personally more inclined to go with the first (because it means you can verifiably say the configuration option does not impact the original implementation in any way).
In the current implementation, we have 3 possible combinations:
- partitioned_events and use_pg_partman are both false: it works as the original implementation
- partitioned_events is true and use_pg_partman is false: the events table is partitioned but pg_partman is not used to manage partitions
- partitioned_events and use_pg_partman are both true: pg_partman must be installed, the events table is partitioned, and partitions are managed by pg_partman
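The three combinations above can be encoded in a small validation helper. This is a hypothetical sketch (PartitionConfig is not part of the library), showing that the only invalid combination is use_pg_partman without partitioned_events:

```elixir
# Hypothetical helper: use_pg_partman only makes sense when
# partitioned_events is also enabled.
defmodule PartitionConfig do
  def validate(opts) do
    partitioned = Keyword.get(opts, :partitioned_events, false)
    partman = Keyword.get(opts, :use_pg_partman, false)

    if partman and not partitioned do
      {:error, :use_pg_partman_requires_partitioned_events}
    else
      {:ok, %{partitioned_events: partitioned, use_pg_partman: partman}}
    end
  end
end
```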
For testing, I chose an approach where the same test suite works successfully across all configurations. Instead of having one set of tests per configuration combination, we use a single test suite that is compatible with all configuration combinations.
> For testing, I chose an approach where the same test suite works successfully across all configurations.
Does that mean you've configured the test suite to run with each combination?
It means that if you change the combination and run the test suite, the tests will create the events table according to what you configured and test against it. You don't have to change anything in the test suite; all the tests work with all the possible combinations. If you never need a partitioned events table, you never need to test it. I applied my change at the lowest level possible so it's transparent to the tests. If the partitioned_events config parameter is set, the tests automatically pass the parameter to the event store functions that need it.
Okay, I understand that a little bit better. One more question: Wouldn't you have to constantly turn the feature on and off to ensure the codebase is working correctly for all configuration options following a specific change? I was thinking we'd automate that (we could run it asynchronously if the partitioned database can run concurrently with the non-partitioned database, which means we wouldn't affect the suite's performance by all that much).
I'm going to have a second look tomorrow morning.
config/bench.exs
Outdated
  pool_size: 10,
- serializer: EventStore.TermSerializer
+ serializer: EventStore.TermSerializer,
+ partitioned_events: true, # Default false, set to true if you want a partioned events table
Comment no longer reflects reality:
- partitioned_events: true, # Default false, set to true if you want a partioned events table
+ partitioned_events: true,
lib/event_store/sql/statements.ex
Outdated
  @external_resource file
+ #EEx.function_from_file(:def, fun, file, args ++ [:partitioned], engine: EventStore.EExIOListEngine)
Suggested change: remove the commented-out line.
- #EEx.function_from_file(:def, fun, file, args ++ [:partitioned], engine: EventStore.EExIOListEngine)
  {:soft_delete_stream, [:schema]},
- {:hard_delete_stream, [:schema]},
+ {:hard_delete_stream, [:schema, :partitioned]},
soft_delete_stream doesn't take :partitioned as an argument, is that intentional?
Yes! soft_delete_stream just updates the row and sets the deleted_at field to NOW(), whereas hard_delete_stream deletes rows from several tables, including the events_root table if we have a partitioned events table.
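The difference can be sketched in SQL. This is a hedged illustration only: events_root comes from the comment above, but the other table names, columns, and WHERE clauses are hypothetical, not the library's actual statements:

```sql
-- soft delete: keep the rows, only mark the stream as deleted
UPDATE event_store.streams SET deleted_at = NOW() WHERE stream_id = $1;

-- hard delete: physically remove rows from several tables; with a
-- partitioned events table this includes the events_root parent table
DELETE FROM event_store.stream_events WHERE stream_id = $1;
DELETE FROM event_store.events_root WHERE stream_id = $1;
DELETE FROM event_store.streams WHERE stream_id = $1;
```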
  %{conn: conn, schema: schema} = context
  Appender.append(conn, stream_id, recorded_events, schema: schema)
+ partitioned = Application.get_env(:eventstore, EventStore)[:partitioned_events] || false
Shouldn't this option come from context?
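A sketch of that suggestion: resolve the flag once in setup and read it from the test context, rather than hitting the application environment inside each test. The setup shape here is hypothetical:

```elixir
# Hypothetical setup: resolve the flag once and put it in the context.
setup do
  partitioned = Application.get_env(:eventstore, EventStore)[:partitioned_events] || false
  [partitioned: partitioned]
end

# In the test body, read it from the context instead of the app env:
%{conn: conn, schema: schema, partitioned: partitioned} = context
Appender.append(conn, stream_id, recorded_events, schema: schema, partitioned: partitioned)
```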
test/streams/all_stream_test.exs
Outdated
  events = EventFactory.create_events(3)
  :ok = Stream.append_to_stream(conn, stream_uuid, 0, events, opts)
+ #IO.inspect(opts)
Suggested change: remove the leftover debug line.
- #IO.inspect(opts)
  {expected_version, opts} = Keyword.pop(opts, :expected_version)
  {created_at, opts} = Keyword.pop(opts, :created_at_override)
  partitioned = Keyword.get(opts, :partitioned_events, false)
+ {debug, opts} = Keyword.pop(opts, :debug)
You should use Logger.level/0 to retrieve the level and print your messages instead of passing debug to the keyword list.
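A sketch of the suggested pattern, gating the output on the configured level instead of threading a :debug flag through opts (the message text is illustrative):

```elixir
require Logger

# Rely on the configured Logger level rather than a :debug option in opts.
# Passing a function delays building the message unless it will be logged.
if Logger.level() == :debug do
  Logger.debug(fn -> "append_to_stream opts: #{inspect(opts)}" end)
end
```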
  when length(events) < 1000 do
  {serializer, new_opts} = Keyword.pop(opts, :serializer)
  CREATE TRIGGER no_delete_events
  BEFORE DELETE ON #{schema}.events
  FOR EACH STATEMENT
  EXECUTE PROCEDURE #{schema}.event_store_delete('Cannot delete events');
Suggested change: drop the trailing semicolon so the init function contains only one SQL statement.
- CREATE TRIGGER no_delete_events
- BEFORE DELETE ON #{schema}.events
- FOR EACH STATEMENT
- EXECUTE PROCEDURE #{schema}.event_store_delete('Cannot delete events');
+ CREATE TRIGGER no_delete_events
+ BEFORE DELETE ON #{schema}.events
+ FOR EACH STATEMENT
+ EXECUTE PROCEDURE #{schema}.event_store_delete('Cannot delete events')
Allow EventStore to work with a partitioned events table and use pg_partman (a PostgreSQL extension) to automatically manage partitions.