Building a Hall of Fame Leaderboard: Three Approaches Compared
Building a real-time leaderboard for gaming platforms presents unique challenges. When players are constantly generating wins across multiple games, you need a system that maintains top rankings efficiently while handling high-frequency updates. This article explores three different approaches to implementing a Hall of Fame feature that displays the top 25 players by highest wins for each game.
The Requirements
Our gaming platform has a straightforward architecture:
- Admins configure multiple games
- Users register and play various games
- Each game emits events for bets, wins, and rollbacks
We need to build a Hall of Fame widget that displays the top 25 users with the highest wins for each game. The solution must handle high-frequency updates, maintain accuracy across concurrent operations, and deliver sub-second read performance.
Approach 1: Cachex Distributed Cache
Cachex is an in-memory caching library for Elixir that offers distributed caching capabilities. The strategy here is to maintain the top 100 highest wins per game in cache, displaying only the top 25 to users.
Implementation
Store a sorted list of the top 100 wins per game in Cachex. When a new win occurs, check if it exceeds the minimum cached win, then update and re-sort if necessary:
```elixir
defmodule HallOfFame.Cachex do
  @cache_name :hall_of_fame
  @top_count 100

  def record_win(game_id, user_id, win_amount) do
    cache_key = "game_#{game_id}"

    # Get current top wins (default to an empty list on a cold cache)
    current_top = Cachex.get!(@cache_name, cache_key) || []

    # Check if the win qualifies
    if length(current_top) < @top_count or
         win_amount > get_min_win(current_top) do
      # Add the new win, re-sort, and trim back to the top 100
      updated =
        [{user_id, win_amount} | current_top]
        |> Enum.sort_by(&elem(&1, 1), :desc)
        |> Enum.take(@top_count)

      Cachex.put(@cache_name, cache_key, updated)
    end
  end

  def get_top_25(game_id) do
    cache_key = "game_#{game_id}"

    # Guard against a nil result on a cold cache
    (Cachex.get!(@cache_name, cache_key) || [])
    |> Enum.take(25)
  end

  # The list is sorted descending, so the minimum is the last element
  defp get_min_win([]), do: 0
  defp get_min_win(wins), do: elem(List.last(wins), 1)
end
```
Pros
- Extreme read performance: Sub-millisecond reads directly from memory
- Low latency: No network calls or database queries for reads
- Simple implementation: Straightforward logic without complex data structures
- Resource efficient: Minimal memory footprint for top 100 entries per game
Cons
- Cache synchronization complexity: In distributed environments with multiple nodes, cache updates require coordination. Without proper synchronization, different nodes may have inconsistent views of the leaderboard
- Race conditions: Concurrent writes from multiple processes can lead to lost updates. Two simultaneous wins might both read the same cached state, update independently, and overwrite each other
- Expensive sorting operations: Every qualifying win triggers a full sort of 100 entries, creating O(n log n) overhead on the write path
- Data persistence risk: Cache is volatile - a node restart loses all leaderboard data unless backed by persistent storage
- Manual cache warming: Cold starts require rebuilding the cache from database, adding complexity to deployment
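The race condition above can at least be contained on a single node with Cachex's transactional API, which serializes access to a given set of keys. A minimal sketch, assuming the same `:hall_of_fame` cache and `{user_id, win_amount}` entry format as the module above (note that Cachex transactions only serialize writers within one node, so this does not solve cross-node inconsistency):

```elixir
# Sketch: serialize the read-modify-write cycle per leaderboard key so two
# concurrent wins on this node cannot overwrite each other.
def record_win_locked(game_id, user_id, win_amount) do
  cache_key = "game_#{game_id}"

  Cachex.transaction(:hall_of_fame, [cache_key], fn cache ->
    # All access to cache_key is exclusive inside this function
    current = Cachex.get!(cache, cache_key) || []

    updated =
      [{user_id, win_amount} | current]
      |> Enum.sort_by(&elem(&1, 1), :desc)
      |> Enum.take(100)

    Cachex.put(cache, cache_key, updated)
  end)
end
```

The trade-off is that every write to the same game now queues behind the lock, which caps per-game write throughput.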
When to Use
Cachex works well for single-node applications or when eventual consistency is acceptable. It's ideal for read-heavy workloads where leaderboards update infrequently and slightly stale rankings are tolerable.
Approach 2: PostgreSQL with Transactions
Using a dedicated hall_of_fame table, we can maintain at most 25 top entries per game. Each win triggers a database transaction that checks whether it qualifies and atomically updates the leaderboard.
Implementation
Create a table to store top wins and use transactions to ensure atomic updates:
```sql
CREATE TABLE hall_of_fame (
  id BIGSERIAL PRIMARY KEY,
  game_id BIGINT NOT NULL,
  user_id BIGINT NOT NULL,
  win_amount DECIMAL(20, 2) NOT NULL,
  created_at TIMESTAMP DEFAULT NOW(),
  UNIQUE (game_id, user_id)
);

CREATE INDEX idx_hall_of_fame_game_wins
  ON hall_of_fame (game_id, win_amount DESC);
```
```elixir
defmodule HallOfFame.Postgres do
  import Ecto.Query

  alias MyApp.Repo
  alias MyApp.HallOfFame

  def record_win(game_id, user_id, win_amount) do
    Repo.transaction(fn ->
      # Serialize concurrent writers for the same game. Without a lock, two
      # transactions can both pass the count check and insert a 26th row.
      Repo.query!("SELECT pg_advisory_xact_lock($1)", [game_id])

      # Get the minimum win currently in this game's hall of fame
      min_win =
        from(h in HallOfFame,
          where: h.game_id == ^game_id,
          order_by: [asc: h.win_amount],
          limit: 1,
          select: h.win_amount
        )
        |> Repo.one()

      # Count current entries
      count =
        from(h in HallOfFame,
          where: h.game_id == ^game_id,
          select: count(h.id)
        )
        |> Repo.one()

      # Insert if the hall of fame is not full or the win beats the minimum.
      # win_amount is assumed to be a Decimal to match the column type;
      # comparing Decimal structs with > gives wrong results.
      if count < 25 or (min_win && Decimal.gt?(win_amount, min_win)) do
        # Insert or update the user's win, keeping their highest amount
        %HallOfFame{}
        |> HallOfFame.changeset(%{
          game_id: game_id,
          user_id: user_id,
          win_amount: win_amount
        })
        |> Repo.insert(
          on_conflict:
            from(h in HallOfFame,
              update: [
                set: [
                  win_amount: fragment("GREATEST(EXCLUDED.win_amount, ?)", h.win_amount)
                ]
              ]
            ),
          conflict_target: [:game_id, :user_id]
        )

        # Remove the lowest entry only if the upsert actually grew the table
        # past 25 (an update to an existing user adds no row). Ecto's
        # delete_all does not support order_by/limit, so select the id first.
        new_count =
          from(h in HallOfFame, where: h.game_id == ^game_id, select: count(h.id))
          |> Repo.one()

        if new_count > 25 do
          lowest_id =
            from(h in HallOfFame,
              where: h.game_id == ^game_id,
              order_by: [asc: h.win_amount],
              limit: 1,
              select: h.id
            )
            |> Repo.one()

          from(h in HallOfFame, where: h.id == ^lowest_id) |> Repo.delete_all()
        end
      end
    end)
  end

  def get_top_25(game_id) do
    from(h in HallOfFame,
      where: h.game_id == ^game_id,
      order_by: [desc: h.win_amount],
      limit: 25
    )
    |> Repo.all()
  end
end
```
Pros
- ACID guarantees: Transactions ensure consistency and prevent race conditions
- Data persistence: Leaderboards survive crashes and restarts
- Familiar tooling: Leverage existing PostgreSQL infrastructure and monitoring
- Atomic operations: No risk of concurrent updates creating inconsistent state
- Backup and recovery: Standard database backup procedures protect leaderboard data
Cons
- Write performance overhead: Each win requires multiple queries in a transaction - checking minimum, counting entries, inserting/updating, and potentially deleting
- Lock contention: High-frequency updates to the same game's leaderboard create transaction serialization, reducing throughput
- Index maintenance cost: Every write updates indexes, adding I/O overhead
- Scalability limits: Database becomes a bottleneck under heavy concurrent load
- Network latency: Every operation requires round-trip to database server
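Some of the write-path overhead can be trimmed by collapsing the minimum and count checks into a single query. A sketch, assuming the `Ecto.Query` import and the `Repo`/`HallOfFame` aliases from the module above (this helper is illustrative, not part of the original implementation):

```elixir
# Sketch: fetch the entry count and minimum win in one round-trip
# instead of two separate queries inside the transaction.
def leaderboard_stats(game_id) do
  from(h in HallOfFame,
    where: h.game_id == ^game_id,
    select: {count(h.id), min(h.win_amount)}
  )
  |> Repo.one()
end
```

This halves the read queries per write, though the transaction and lock contention costs remain.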
When to Use
PostgreSQL excels when data consistency is paramount and write volume is moderate. It's the right choice for applications that already use PostgreSQL and can't tolerate eventual consistency or data loss.
Approach 3: Redis Sorted Sets
Redis sorted sets are purpose-built for leaderboards. Each member has an associated score, and Redis maintains them in sorted order automatically. This eliminates manual sorting and provides atomic operations.
Implementation
Use one sorted set per game, with user IDs as members and win amounts as scores:
```elixir
defmodule HallOfFame.Redis do
  def record_win(game_id, user_id, win_amount) do
    key = "hall_of_fame:game:#{game_id}"

    # Add/update the user's win (atomic operation). The GT flag
    # (Redis >= 6.2) only updates the score when the new win is higher,
    # so a later, smaller win never lowers a user's best entry.
    # Redix commands must be lists of strings, hence to_string/1.
    Redix.command(:redix, [
      "ZADD", key, "GT", to_string(win_amount), to_string(user_id)
    ])

    # Keep only the top 100 entries to manage memory
    Redix.command(:redix, [
      "ZREMRANGEBYRANK", key, "0", "-101"
    ])
  end

  def get_top_25(game_id) do
    key = "hall_of_fame:game:#{game_id}"

    # Get the top 25 with scores (O(log(N) + M) complexity)
    Redix.command(:redix, [
      "ZREVRANGE", key, "0", "24", "WITHSCORES"
    ])
    |> parse_results()
  end

  defp parse_results({:ok, results}) do
    results
    |> Enum.chunk_every(2)
    |> Enum.map(fn [user_id, score] ->
      # Scores come back as strings and may lack a decimal point
      # ("100" would crash String.to_float), so use Float.parse
      {amount, _rest} = Float.parse(score)
      {user_id, amount}
    end)
  end
end
```
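The two commands in record_win each cost a network round-trip. Redix can batch them into one with `Redix.pipeline/2`; a sketch, assuming the same `:redix` connection name (note a pipeline batches commands but is not a MULTI/EXEC transaction, so other clients may observe the state between the two commands):

```elixir
# Sketch: issue the ZADD and the trim in a single round-trip.
# GT (Redis >= 6.2) only updates a member's score when the new one is higher.
def record_win_pipelined(game_id, user_id, win_amount) do
  key = "hall_of_fame:game:#{game_id}"

  Redix.pipeline(:redix, [
    ["ZADD", key, "GT", to_string(win_amount), to_string(user_id)],
    ["ZREMRANGEBYRANK", key, "0", "-101"]
  ])
end
```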
Pros
- Optimal performance: O(log(N)) insertion, no manual sorting required
- Atomic operations: ZADD and ZREVRANGE are atomic, preventing race conditions
- Purpose-built data structure: Sorted sets are designed exactly for this use case
- Horizontal scalability: Redis Cluster enables sharding across multiple nodes
- High throughput: Can handle thousands of concurrent updates per second
- Simple codebase: Minimal application logic due to Redis built-in capabilities
- Memory efficiency: Configurable retention with ZREMRANGEBYRANK to control memory usage
Cons
- Additional infrastructure: Requires running and maintaining Redis servers
- Memory constraints: All data lives in memory - need capacity planning for all games and users
- Data persistence configuration: Requires setting up RDB/AOF persistence to prevent data loss
- Network dependency: Every operation requires network round-trip to Redis
- Monitoring complexity: Need separate monitoring for Redis health and performance
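For the persistence point above, a minimal redis.conf fragment enabling both snapshotting and the append-only file might look like this (values are illustrative; tune the intervals to your durability requirements):

```conf
# Snapshot to disk if at least 1 key changed in the last 60 seconds
save 60 1

# Append-only file: log every write, fsync once per second
appendonly yes
appendfsync everysec
```

With appendfsync everysec, a crash loses at most about one second of leaderboard updates.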
When to Use
Redis sorted sets are the optimal choice for high-traffic leaderboards requiring real-time updates and excellent read performance. If you're building a production gaming platform with thousands of concurrent players, this is the recommended approach.
Performance Comparison
For a typical gaming platform with 100 concurrent games and 1000 wins per second:
- Cachex: Reads under 1ms, writes 5-10ms (sorting overhead), potential inconsistency across nodes
- PostgreSQL: Reads 5-15ms (indexed query), writes 20-50ms (transaction overhead), guaranteed consistency
- Redis: Reads 1-3ms (network + lookup), writes 2-5ms (atomic per-command operations), high throughput with consistency at the level of individual commands
Conclusion
Each approach has distinct trade-offs:
Choose Cachex if you're running a single-node application or can accept eventual consistency. It's the simplest implementation with the fastest reads, but falls short in distributed environments.
Choose PostgreSQL if you prioritize data durability and consistency over raw performance, have moderate traffic, and want to minimize infrastructure dependencies. It's reliable but doesn't scale to high-frequency updates.
Choose Redis for production gaming platforms requiring real-time leaderboards with high concurrency. The infrastructure investment pays off with optimal performance, scalability, and purpose-built semantics for ranking data.
For the Hall of Fame feature described in the requirements, Redis sorted sets provide the best combination of performance, correctness, and scalability. The atomic operations, automatic sorting, and high throughput make it the clear winner for real-time leaderboards at scale.