
Conversation

@h0lybyte
Member

Upstream Sync

This PR contains the latest changes from the upstream repository.

Changes included:

  • Synced from upstream/main
  • Auto-generated by upstream sync workflow

Review checklist:

  • Review the changes for any breaking changes
  • Check for conflicts with local modifications
  • Verify tests pass (if applicable)

This PR was automatically created by the upstream sync workflow

filipecabaco and others added 30 commits September 2, 2025 15:45
Currently a user would need to have presence enabled from the beginning of the channel. This change lets users enable presence later in the flow by sending a track message, which turns on presence messages for them.
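A minimal sketch of the idea at the channel layer, assuming a Phoenix.Channel-style handler; the module, assign name, and clauses are illustrative, not the actual Realtime code:

```elixir
defmodule IllustrativeChannel do
  use Phoenix.Channel

  def join(_topic, _params, socket), do: {:ok, assign(socket, :presence_enabled?, false)}

  # Presence starts disabled; the first "track" message flips it on and is then
  # handled normally.
  def handle_in("presence", %{"event" => "track"} = payload, %{assigns: %{presence_enabled?: false}} = socket) do
    handle_in("presence", payload, assign(socket, :presence_enabled?, true))
  end

  def handle_in("presence", _payload, socket) do
    # ...existing presence track/untrack handling...
    {:noreply, socket}
  end
end
```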
cowboy 2.13.0 changed the default to active_n=1
Currently all text frames are handled only as JSON, which already requires UTF-8
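A minimal sketch of pinning this explicitly, assuming the endpoint runs on Plug.Cowboy; the app/endpoint names and the value 100 are assumptions, not taken from this PR's diff:

```elixir
import Config

# Illustrative only: pin active_n back to a larger value instead of relying on
# cowboy 2.13's new default of 1.
config :realtime, RealtimeWeb.Endpoint,
  http: [protocol_options: [active_n: 100]]
```

Relatedly, since all text frames are JSON (which already requires valid UTF-8), cowboy's websocket option `validate_utf8: false` can skip the redundant per-frame check; where exactly that option is set depends on how the socket is mounted.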
This change reduces the impact of a slow DB setup on other tenants that were trying to connect at the same time and landed on the same partition.
Verify that the replication connection is able to reconnect when faced with WAL bloat issues
A new index was created on (inserted_at DESC, topic) WHERE private IS TRUE AND extension = 'broadcast'.

The hardcoded limit is 25 for now.
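As a sketch, such a partial index could be written as an Ecto migration like the one below; the `messages` table name and the migration module are assumptions based on the commit message, not the actual migration:

```elixir
defmodule Realtime.Repo.Migrations.AddPrivateBroadcastIndex do
  use Ecto.Migration

  def change do
    # Partial index covering recent private broadcast messages per topic.
    create index(:messages, ["inserted_at DESC", :topic],
             where: "private IS TRUE AND extension = 'broadcast'"
           )
  end
end
```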
Add a PubSub adapter that uses gen_rpc to send messages to other nodes.

It uses :gen_rpc.abcast/3 instead of :erlang.send/2

The adapter works very similarly to the PG2 adapter. It consists of
multiple workers that forward to the local node using PubSub.local_broadcast.

The worker to use is chosen based on the sending process, just like the PG2 adapter does.

The number of workers is controlled by `:pool_size` or `:broadcast_pool_size`.
This distinction exists because Phoenix.PubSub uses `:pool_size` to
define how many partitions the PubSub registry will use. It's possible
to control them separately by using `:broadcast_pool_size`.
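A minimal sketch of an adapter shaped like the description above, assuming gen_rpc is already configured on every node; the module names, worker naming, and persistent_term bookkeeping are illustrative, not the actual implementation:

```elixir
defmodule GenRpcPubSub do
  @behaviour Phoenix.PubSub.Adapter

  use Supervisor

  def start_link(opts), do: Supervisor.start_link(__MODULE__, opts)

  @impl Supervisor
  def init(opts) do
    adapter_name = Keyword.fetch!(opts, :adapter_name)
    pubsub = Keyword.fetch!(opts, :name)
    pool_size = opts[:broadcast_pool_size] || opts[:pool_size] || 1

    # Remember the pool size so broadcast/4 can pick a worker later.
    :persistent_term.put({__MODULE__, adapter_name}, pool_size)

    children =
      for index <- 0..(pool_size - 1) do
        name = worker_name(adapter_name, index)
        Supervisor.child_spec({GenRpcPubSub.Worker, {name, pubsub}}, id: name)
      end

    Supervisor.init(children, strategy: :one_for_one)
  end

  @impl Phoenix.PubSub.Adapter
  def node_name(_adapter_name), do: node()

  @impl Phoenix.PubSub.Adapter
  def broadcast(adapter_name, topic, message, dispatcher) do
    # :gen_rpc.abcast/3 sends the message to the named worker on every node.
    :gen_rpc.abcast(
      Node.list([:this, :visible]),
      pick_worker(adapter_name),
      {:forward, topic, message, dispatcher}
    )

    :ok
  end

  @impl Phoenix.PubSub.Adapter
  def direct_broadcast(adapter_name, target_node, topic, message, dispatcher) do
    :gen_rpc.abcast([target_node], pick_worker(adapter_name), {:forward, topic, message, dispatcher})
    :ok
  end

  # The worker is derived from the sending process, like the PG2 adapter does.
  defp pick_worker(adapter_name) do
    pool_size = :persistent_term.get({__MODULE__, adapter_name})
    worker_name(adapter_name, :erlang.phash2(self(), pool_size))
  end

  defp worker_name(adapter_name, index), do: :"#{adapter_name}_#{index}"
end

defmodule GenRpcPubSub.Worker do
  use GenServer

  def start_link({name, pubsub}), do: GenServer.start_link(__MODULE__, pubsub, name: name)

  @impl GenServer
  def init(pubsub), do: {:ok, pubsub}

  @impl GenServer
  def handle_info({:forward, topic, message, dispatcher}, pubsub) do
    # Deliver only to local subscribers; gen_rpc already crossed node boundaries.
    Phoenix.PubSub.local_broadcast(pubsub, topic, message, dispatcher)
    {:noreply, pubsub}
  end
end
```

With this shape, a config such as `{Phoenix.PubSub, name: Realtime.PubSub, adapter: GenRpcPubSub, pool_size: 10, broadcast_pool_size: 20}` keeps the registry partition count and the broadcast worker pool independently tunable (the option names come from the description above; the numbers are made up).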
---------

Co-authored-by: Eduardo Gurgel Pinho <eduardo.gurgel@supabase.io>
* fix: set max process heap size to 500MB instead of 8GB
* feat: set websocket transport max heap size

WEBSOCKET_MAX_HEAP_SIZE can be used to configure it
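A minimal sketch of the underlying mechanism, assuming WEBSOCKET_MAX_HEAP_SIZE is given in bytes and defaults to 500 MB (both assumptions); `:max_heap_size` is measured in words, hence the conversion:

```elixir
# Read the configured cap and apply it to the current process, killing it and
# logging an error if its heap ever grows past the limit.
max_heap_bytes =
  "WEBSOCKET_MAX_HEAP_SIZE"
  |> System.get_env(Integer.to_string(500 * 1024 * 1024))
  |> String.to_integer()

Process.flag(:max_heap_size, %{
  size: div(max_heap_bytes, :erlang.system_info(:wordsize)),
  kill: true,
  error_logger: true
})
```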
Issues:

* A single gen_rpc_dispatcher can become a bottleneck if connecting takes some time
* Many calls can land on the dispatcher even though the node is already gone. If we don't validate the node, gen_rpc keeps trying to connect until it times out instead of quickly giving up because the target is not an actively connected node (see the sketch below).
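A minimal sketch of the quick-fail idea with a hypothetical wrapper (not gen_rpc's own API): refuse up front when the target is not a currently connected node, instead of letting the single dispatcher block on a doomed connection attempt.

```elixir
defmodule RpcGuard do
  # Hypothetical helper: only dispatch through gen_rpc when the target is this
  # node or a currently connected node; otherwise fail fast.
  def call(target, mod, fun, args, timeout) do
    if target == node() or target in Node.list(:visible) do
      :gen_rpc.call(target, mod, fun, args, timeout)
    else
      {:badrpc, :nodedown}
    end
  end
end
```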
Include initial_call, ancestors, registered_name, message_queue_len and total_heap_size

Also bump the long_schedule and long_gc thresholds
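A minimal sketch of this kind of instrumentation; the thresholds and the exact key list are illustrative:

```elixir
# Raise the system_monitor thresholds so only genuinely long pauses fire.
:erlang.system_monitor(self(), [{:long_schedule, 500}, {:long_gc, 500}])

# Extra metadata to attach when reporting on a process; ancestors live in the
# process dictionary under :"$ancestors", so :dictionary is fetched too.
pid = self()

Process.info(pid, [
  :initial_call,
  :registered_name,
  :message_queue_len,
  :total_heap_size,
  :dictionary
])
```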
On bad connections, we rate limit the Connect module to prevent abuse and excessive error logging.
Currently, whenever you push a commit to your branch, the old builds keep running while a new build starts. Once a new commit is added, the old test results no longer matter, so they just waste CI resources. It also creates confusion, with multiple builds running in parallel for the same branch and possibly blocking merges.

With this small change, whenever a new commit is pushed, the previous build is immediately canceled and only the build for the latest commit runs.
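One common way to express this in a GitHub Actions workflow (the exact group key used by this repo's workflow is an assumption):

```yaml
# Cancel any in-flight run for the same workflow and branch when a new commit
# is pushed, so only the latest build keeps running.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```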
* fix: reduce max_frame_size to 5MB
* fix: fullsweep_after=100 on gen rpc pub sub workers
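Two small sketches of where these knobs live, using the values from the bullets above; exact placement is assumed rather than copied from the diff:

```elixir
# cowboy_websocket option: reject frames larger than roughly 5 MB.
websocket_opts = %{max_frame_size: 5 * 1024 * 1024}

# Inside the worker's start_link from the adapter sketch above: start each
# gen_rpc PubSub worker with periodic full-sweep GC.
GenServer.start_link(__MODULE__, pubsub, name: name, spawn_opt: [fullsweep_after: 100])
```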

---------

Co-authored-by: Eduardo Gurgel Pinho <eduardo.gurgel@supabase.io>
edgurgel and others added 30 commits November 27, 2025 10:39
This approach prevents sending more information than needed, reducing the amount of data sent via gen_rpc. We also remove some unnecessary calls to the primary in the delete controller operation.
Avoid using the same container over and over
* Allow 3x more max heap size than other processes
* Avoid compressing given that gen_rpc does this for us already
* fix: remove limit_concurrent metric
* fix: revert the way we fetched tenants to report on connected metric
* fix: remove tenant tag on RPC metrics
* chore: bump version
It's usually 2x-3x faster than the Peep exporter due to the amount of
repeated metrics and tags