Advanced Topics: Redis Modules and Beyond

While Redis’s core data structures (Strings, Hashes, Lists, Sets, Sorted Sets, Streams) are incredibly powerful, there are many specialized data processing needs that go beyond them. This is where Redis Modules shine.

Historically, Redis Modules were separate add-ons that extended Redis’s functionality. With the release of Redis Open Source 8.x, many of these powerful features have been integrated directly into the Redis core distribution (or are easily available via Redis Stack, which bundles them). This dramatically simplifies deployment and unlocks new capabilities, especially in areas like AI, real-time analytics, and search.

In this chapter, we’ll explore some of the most impactful Redis Modules and advanced features now available:

  • RedisJSON: For efficient storage and querying of JSON documents.
  • Redis Stack’s Query Engine (formerly RediSearch): Full-text search, secondary indexing, and vector search.
  • RedisTimeSeries: For high-volume time-series data ingestion and querying.
  • RedisBloom: Probabilistic data structures like Bloom filters and Cuckoo filters for memory-efficient membership testing.
  • Geospatial capabilities: Advanced location-based queries using core Redis features.

To easily get these features, we highly recommend running Redis using the redis/redis-stack-server Docker image, which bundles all these modules.
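
For example, the following command starts a local instance with all of these modules loaded (the container name is arbitrary, and the default port 6379 is mapped to the host):

docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest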

1. RedisJSON: Managing Document Data

Storing JSON data in Redis historically involved serializing it to a string (JSON.stringify) and storing it as a Redis String. This required reading the entire string, deserializing, modifying, serializing again, and writing back – an inefficient process for partial updates.

RedisJSON provides native support for storing, updating, and querying JSON documents directly within Redis. It treats JSON as a first-class data type.

Key features:

  • Native JSON type: Stores JSON documents directly, not as strings.
  • JSONPath support: Allows accessing and modifying parts of a JSON document using JSONPath syntax.
  • Atomic partial updates: Update specific fields within a JSON document without reading/writing the whole thing.
  • Memory efficiency: Optimized storage for JSON.

Common RedisJSON Commands:

  • JSON.SET key path value [NX|XX]: Sets a JSON value at path within key.
    • NX: Only set if path does not exist.
    • XX: Only set if path already exists.
  • JSON.GET key [path ...]: Retrieves JSON values from key at specified paths.
  • JSON.DEL key [path]: Deletes a JSON value at path.
  • JSON.ARRAPPEND key path value [value ...]: Appends values to a JSON array.
  • JSON.NUMINCRBY key path value: Increments a number at path.

Node.js Example:

// redis_json_demo.js
const Redis = require('ioredis');
// ioredis has no dedicated RedisJSON API, so we send the JSON.* commands with redis.call().
// (The official node-redis client exposes them as client.json.set() / client.json.get() instead.)
const redis = new Redis();

async function runRedisJSONExamples() {
  const userDocKey = 'user:profile:doc:123';
  await redis.del(userDocKey); // Clear old data

  try {
    console.log('--- RedisJSON Examples ---');

    // 1. JSON.SET: Create a JSON document
    await redis.call('JSON.SET', userDocKey, '$', JSON.stringify({
      name: 'Alice Wonderland',
      email: 'alice@example.com',
      age: 30,
      address: {
        street: '123 Rabbit Hole',
        city: 'Wonderland'
      },
      interests: ['reading', 'tea parties', 'chess'],
      status: 'active'
    }));
    console.log(`\nCreated user JSON document for ${userDocKey}.`);
    console.log(await redis.call('JSON.GET', userDocKey, '$')); // Get the whole document

    // 2. JSON.GET: Retrieve specific fields
    // Note: with '$' JSONPath expressions, replies are JSON arrays (one element per match)
    const [userName] = JSON.parse(await redis.call('JSON.GET', userDocKey, '$.name'));
    const [userCity] = JSON.parse(await redis.call('JSON.GET', userDocKey, '$.address.city'));
    console.log(`\nUser Name: ${userName}, City: ${userCity}`);

    // 3. JSON.SET (partial update): Update age
    await redis.call('JSON.SET', userDocKey, '$.age', '31');
    console.log(`\nUpdated age. New age: ${await redis.call('JSON.GET', userDocKey, '$.age')}`); // [31]

    // 4. JSON.ARRAPPEND: Add a new interest
    await redis.call('JSON.ARRAPPEND', userDocKey, '$.interests', JSON.stringify('gardening'));
    console.log(`\nAdded interest. All interests: ${await redis.call('JSON.GET', userDocKey, '$.interests')}`);

    // 5. JSON.NUMINCRBY: Increment age (requires numerical value)
    await redis.call('JSON.NUMINCRBY', userDocKey, '$.age', 1);
    console.log(`\nIncremented age. New age: ${await redis.call('JSON.GET', userDocKey, '$.age')}`); // [32]

    // 6. JSON.DEL: Delete a field
    await redis.call('JSON.DEL', userDocKey, '$.status');
    console.log(`\nDocument after deleting 'status': ${await redis.call('JSON.GET', userDocKey, '$')}`);

  } catch (err) {
    console.error('Error in RedisJSON examples:', err);
  } finally {
    await redis.del(userDocKey);
    await redis.quit();
  }
}

// runRedisJSONExamples();

Python Example:

# redis_json_demo.py
import redis
import json

r = redis.Redis(host='localhost', port=6379, db=0)

def run_redis_json_examples_py():
    user_doc_key = 'user:profile:doc:py:456'
    r.delete(user_doc_key)

    try:
        print('--- RedisJSON Examples (Python) ---')

        # 1. JSON.SET: Create a JSON document
        r.json().set(user_doc_key, '$', {
            'name': 'Bob The Builder',
            'email': 'bob@example.com',
            'age': 40,
            'address': {
                'street': '789 Construction Ave',
                'city': 'Builderville'
            },
            'tools': ['hammer', 'screwdriver'],
            'status': 'active'
        })
        print(f"\nCreated user JSON document for {user_doc_key}.")
        print(r.json().get(user_doc_key, '$')) # Get the whole document

        # 2. JSON.GET: Retrieve specific fields ('$' paths return a list of matches)
        user_name = r.json().get(user_doc_key, '$.name')[0]
        user_city = r.json().get(user_doc_key, '$.address.city')[0]
        print(f"\nUser Name: {user_name}, City: {user_city}")

        # 3. JSON.SET (partial update): Update age
        r.json().set(user_doc_key, '$.age', 41)
        print(f"\nUpdated age. New age: {r.json().get(user_doc_key, '$.age')}")

        # 4. JSON.ARRAPPEND: Add a new tool
        r.json().arrappend(user_doc_key, '$.tools', 'wrench')
        print(f"\nAdded tool. All tools: {r.json().get(user_doc_key, '$.tools')}")

        # 5. JSON.NUMINCRBY: Increment age
        r.json().numincrby(user_doc_key, '$.age', 1)
        print(f"\nIncremented age. New age: {r.json().get(user_doc_key, '$.age')}")

        # 6. JSON.DEL: Delete a field
        r.json().delete(user_doc_key, '$.status')
        print(f"\nDocument after deleting 'status': {r.json().get(user_doc_key, '$')}")

    except Exception as e:
        print(f"Error in RedisJSON examples (Python): {e}")
    finally:
        r.delete(user_doc_key)
        r.close()

# run_redis_json_examples_py()
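
The NX and XX flags listed under JSON.SET above are not exercised in the examples. Below is a minimal sketch of their behaviour using redis-py; the key name is illustrative only, and the printed return values are hedged as "truthy when applied, None when skipped".

# redis_json_nx_xx_sketch.py
# A minimal sketch of the NX/XX flags of JSON.SET via redis-py; the key name is illustrative.
import redis

r = redis.Redis(host='localhost', port=6379, db=0)
doc_key = 'user:profile:doc:py:789'
r.delete(doc_key)

# NX: only set if the path does not exist yet (truthy when applied, None when skipped)
print(r.json().set(doc_key, '$', {'name': 'Carol'}, nx=True))    # applied (document created)
print(r.json().set(doc_key, '$', {'name': 'Eve'}, nx=True))      # skipped ('$' already exists)

# XX: only set if the path already exists
print(r.json().set(doc_key, '$.age', 28, xx=True))               # skipped ('$.age' does not exist)
print(r.json().set(doc_key, '$.name', 'Caroline', xx=True))      # applied (overwrites existing name)

r.delete(doc_key)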

2. Redis Stack’s Query Engine (formerly RediSearch): Search and Secondary Indexing

The Query Engine, bundled as part of Redis Stack (and integrated into Redis Open Source 8.x), turns Redis into a robust secondary index and search engine. It lets you index JSON documents and Hashes and run complex queries against them, including full-text search over text fields. This is particularly relevant for Redis 8.x, as its capabilities have expanded significantly.

Key features:

  • Full-Text Search: Fast, relevant search on text fields with stemming, fuzziness, and language support.
  • Secondary Indexing: Create indexes on numerical, tag, or geospatial fields to enable powerful filtering and range queries.
  • Vector Search: (A major new feature in modern Redis) Store and query vector embeddings for semantic search, similarity matching, and RAG (Retrieval Augmented Generation) architectures in AI applications.
  • Aggregations: Perform SQL-like aggregations (GROUP BY, SUM, COUNT) on indexed data.

Core Query Engine Commands (Simplified):

  • FT.CREATE index_name SCHEMA ...: Creates an index.
  • FT.SEARCH index_name query_string [PARAMS ...]: Performs a search.
  • FT.INFO index_name: Gets information about an index.

Node.js Example (Conceptual - requires redis/redis-stack-server):

To use Query Engine features effectively, you typically interact with them via client libraries that abstract the commands. For Node.js, ioredis can send the FT.* commands directly via redis.call(), while the official node-redis client ships higher-level methods (ft.create, ft.search) through its bundled search module.

// redis_query_engine_demo.js
const Redis = require('ioredis');
const redis = new Redis(); // Connects to Redis Stack server

async function runQueryEngineExamples() {
  const indexName = 'usersIdx';
  const userPrefix = 'user:profile:';

  try {
    console.log('--- Redis Query Engine Examples ---');

    // Clean up previous index if exists
    await redis.call('FT.DROPINDEX', indexName).catch(() => {});
    await redis.del(`${userPrefix}1`, `${userPrefix}2`, `${userPrefix}3`, `${userPrefix}4`);

    // 1. FT.CREATE: Create an index on JSON documents
    // Schema defines what fields to index and their types (TEXT, NUMERIC, TAG, VECTOR)
    await redis.call(
      'FT.CREATE', indexName,
      'ON', 'JSON', // Index JSON documents
      'PREFIX', '1', userPrefix, // Only index keys starting with 'user:profile:'
      'SCHEMA',
      '$.name', 'AS', 'name', 'TEXT',
      '$.age', 'AS', 'age', 'NUMERIC', 'SORTABLE',
      '$.city', 'AS', 'city', 'TAG',
      '$.bio', 'AS', 'bio', 'TEXT',
      '$.interests', 'AS', 'interests', 'TAG', 'SEPARATOR', ','
      // For vector search, you'd add: '$.embedding', 'AS', 'vector', 'VECTOR', 'FLAT', '6', 'TYPE', 'FLOAT32', 'DIM', '128', 'DISTANCE_METRIC', 'COSINE'
    );
    console.log(`\nCreated index '${indexName}'.`);

    // 2. Add some JSON data that will be indexed
    await redis.call('JSON.SET', `${userPrefix}1`, '$', JSON.stringify({
      name: 'Alice Smith', age: 30, city: 'New York', bio: 'Loves coding and hiking.', interests: 'coding,hiking,reading'
    }));
    await redis.call('JSON.SET', `${userPrefix}2`, '$', JSON.stringify({
      name: 'Bob Johnson', age: 25, city: 'London', bio: 'Enjoys music and gaming.', interests: 'music,gaming'
    }));
    await redis.call('JSON.SET', `${userPrefix}3`, '$', JSON.stringify({
      name: 'Charlie Brown', age: 35, city: 'New York', bio: 'Passionate about photography.', interests: 'photography,art'
    }));
    await redis.call('JSON.SET', `${userPrefix}4`, '$', JSON.stringify({
      name: 'David Lee', age: 30, city: 'Paris', bio: 'Into web development and travel.', interests: 'coding,travel'
    }));
    console.log('\nAdded sample user data.');

    // Give Redis some time to index (usually very fast)
    await new Promise(resolve => setTimeout(resolve, 100));

    // 3. FT.SEARCH: Perform a simple full-text search
    console.log('\n--- Search: Users interested in "coding" ---');
    let searchResults = await redis.call('FT.SEARCH', indexName, '@interests:{coding}', 'RETURN', 2, 'name', 'city');
    // Reply layout: [total, key1, [field, value, ...], key2, [field, value, ...], ...]
    console.log(`Found ${searchResults[0]} result(s):`);
    for (let i = 1; i < searchResults.length; i += 2) {
      const fields = searchResults[i + 1];
      const doc = {};
      for (let j = 0; j < fields.length; j += 2) doc[fields[j]] = fields[j + 1];
      console.log(`  ID: ${searchResults[i]}, Name: ${doc.name}, City: ${doc.city}`);
    }

    // 4. FT.SEARCH: Search with numeric filters (age range)
    console.log('\n--- Search: Users between 25 and 30, sorted by age ---');
    searchResults = await redis.call('FT.SEARCH', indexName, '@age:[25 30]', 'SORTBY', 'age', 'ASC', 'RETURN', 2, 'name', 'age');
    console.log(`Found ${searchResults[0]} result(s):`);
    for (let i = 1; i < searchResults.length; i += 2) {
      const fields = searchResults[i + 1];
      const doc = {};
      for (let j = 0; j < fields.length; j += 2) doc[fields[j]] = fields[j + 1];
      console.log(`  ID: ${searchResults[i]}, Name: ${doc.name}, Age: ${doc.age}`);
    }

    // 5. FT.SEARCH: Combine full-text and tag filters
    console.log('\n--- Search: Users in "New York" with "hiking" in their bio ---');
    // Note: spaces inside TAG queries must be escaped (New\ York)
    searchResults = await redis.call('FT.SEARCH', indexName, '@city:{New\\ York} @bio:hiking', 'RETURN', 2, 'name', 'bio');
    console.log(`Found ${searchResults[0]} result(s):`);
    for (let i = 1; i < searchResults.length; i += 2) {
      const fields = searchResults[i + 1];
      const doc = {};
      for (let j = 0; j < fields.length; j += 2) doc[fields[j]] = fields[j + 1];
      console.log(`  ID: ${searchResults[i]}, Name: ${doc.name}, Bio: ${doc.bio}`);
    }

  } catch (err) {
    console.error('Error in Redis Query Engine examples:', err);
  } finally {
    // Clean up index and data
    await redis.call('FT.DROPINDEX', indexName).catch(() => {});
    await redis.del(`${userPrefix}1`, `${userPrefix}2`, `${userPrefix}3`, `${userPrefix}4`);
    await redis.quit();
  }
}

// Ensure you are running redis/redis-stack-server for this to work
// runQueryEngineExamples();
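
The vector field commented out in the FT.CREATE call above deserves a concrete illustration, since vector search is one of the headline features of the modern Query Engine. The following is a minimal, self-contained sketch using redis-py and numpy; the index name docs_idx, the doc: prefix, and the tiny 4-dimensional dummy embeddings are all made up for illustration, since real embeddings come from an embedding model and typically have hundreds of dimensions.

# redis_vector_search_sketch.py
# A minimal sketch, not production code. Assumes redis/redis-stack-server, redis-py 4.x and numpy.
import numpy as np
import redis

r = redis.Redis(host='localhost', port=6379, db=0)
try:
    r.execute_command('FT.DROPINDEX', 'docs_idx')
except redis.ResponseError:
    pass  # index did not exist yet

DIM = 4  # toy dimension for the example

# Index JSON documents whose $.embedding field is a FLAT float32 vector
r.execute_command(
    'FT.CREATE', 'docs_idx', 'ON', 'JSON', 'PREFIX', '1', 'doc:',
    'SCHEMA',
    '$.text', 'AS', 'text', 'TEXT',
    '$.embedding', 'AS', 'embedding', 'VECTOR', 'FLAT', '6',
    'TYPE', 'FLOAT32', 'DIM', DIM, 'DISTANCE_METRIC', 'COSINE'
)

# Store documents with (dummy) embeddings as JSON arrays of floats
r.json().set('doc:1', '$', {'text': 'Redis is an in-memory database', 'embedding': [0.1, 0.2, 0.3, 0.4]})
r.json().set('doc:2', '$', {'text': 'Elephants are large animals', 'embedding': [0.9, 0.1, 0.0, 0.2]})

# KNN query: the 2 nearest documents to a query vector, passed as raw float32 bytes
query_vec = np.array([0.1, 0.2, 0.25, 0.45], dtype=np.float32).tobytes()
result = r.execute_command(
    'FT.SEARCH', 'docs_idx', '*=>[KNN 2 @embedding $vec AS score]',
    'PARAMS', '2', 'vec', query_vec,
    'SORTBY', 'score',
    'RETURN', '2', 'text', 'score',
    'DIALECT', '2'
)
print(result)  # [total, key, [field, value, ...], ...] -- doc:1 should rank closest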

3. RedisTimeSeries: High-Volume Time-Series Data

RedisTimeSeries is a Redis Module designed for managing large volumes of time-series data efficiently. It provides specialized commands for appending data points, querying ranges, downsampling, and aggregation. Ideal for IoT, financial data, and monitoring.

Key features:

  • High ingestion rates: Optimized for writing millions of data points per second.
  • Low latency queries: Fast retrieval of time-series data over time ranges.
  • Aggregations: Built-in support for downsampling and aggregations (e.g., min, max, sum, avg) over specified time buckets.
  • Labels: Attach labels to time series for secondary indexing and querying.

Core RedisTimeSeries Commands:

  • TS.CREATE key [RETENTION milliseconds] [LABELS field value ...]
  • TS.ADD key timestamp value [LABELS field value ...]
  • TS.RANGE key fromTimestamp toTimestamp [AGGREGATION type bucketSize]

Python Example (Conceptual - requires redis/redis-stack-server):

# redis_timeseries_demo.py
import redis
import time
import random

r = redis.Redis(host='localhost', port=6379, db=0)

def run_redis_timeseries_examples_py():
    temp_sensor_key = 'temp:sensor:kitchen'
    cpu_usage_key = 'cpu:server:web01'
    
    try:
        print('--- RedisTimeSeries Examples (Python) ---')

        # Clean up existing time series
        r.delete(temp_sensor_key, cpu_usage_key)

        # 1. TS.CREATE: Create time series
        # Set a retention of 1 hour (3600000 ms) for temp sensor
        r.ts().create(temp_sensor_key, retention_msecs=3600000, labels={'location': 'kitchen', 'sensor_type': 'temperature'})
        # No retention for CPU usage, for longer history
        r.ts().create(cpu_usage_key, labels={'host': 'web01', 'metric_type': 'cpu_usage'})
        print(f"\nCreated time series: '{temp_sensor_key}' and '{cpu_usage_key}'.")

        # 2. TS.ADD: Add data points
        current_time_ms = int(time.time() * 1000)
        for i in range(10): # Add 10 data points
            timestamp = current_time_ms + i * 1000 # 1 second apart
            temp_value = round(random.uniform(20.0, 25.0), 2)
            cpu_value = round(random.uniform(10.0, 80.0), 2)

            r.ts().add(temp_sensor_key, timestamp, temp_value)
            r.ts().add(cpu_usage_key, timestamp, cpu_value)
            print(f"  Added data for {timestamp}: Temp={temp_value}, CPU={cpu_value}")
            time.sleep(0.1) # Simulate real-time data flow

        # Add some more data for aggregation demonstration later
        for i in range(10, 20):
            timestamp = current_time_ms + i * 1000
            temp_value = round(random.uniform(25.0, 30.0), 2)
            cpu_value = round(random.uniform(50.0, 95.0), 2)
            r.ts().add(temp_sensor_key, timestamp, temp_value)
            r.ts().add(cpu_usage_key, timestamp, cpu_value)
            time.sleep(0.05)


        # 3. TS.RANGE: Retrieve all data points
        print(f"\n--- Retrieving all temperature data for '{temp_sensor_key}' ---")
        temp_data = r.ts().range(temp_sensor_key, '-', '+')
        for ts, val in temp_data:
            print(f"  Timestamp: {ts}, Value: {val}")

        # 4. TS.RANGE with AGGREGATION: Get average CPU usage every 5 seconds
        print(f"\n--- Average CPU usage for '{cpu_usage_key}' (5-second aggregation) ---")
        # Use a specific time range relative to current_time_ms to ensure results
        aggregated_cpu = r.ts().range(cpu_usage_key, current_time_ms, current_time_ms + 20000, aggregation_type='avg', bucket_size_msec=5000)
        for ts, val in aggregated_cpu:
            print(f"  Timestamp: {ts}, Avg CPU: {float(val):.2f}")

    except Exception as e:
        print(f"Error in RedisTimeSeries examples (Python): {e}")
    finally:
        r.delete(temp_sensor_key, cpu_usage_key)
        r.close()

# Ensure you are running redis/redis-stack-server for this to work
# run_redis_timeseries_examples_py()
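
The example above aggregates at query time, but the downsampling mentioned under "Aggregations" can also run continuously on the server via compaction rules (TS.CREATERULE). Here is a minimal sketch with redis-py; the key names temp:raw and temp:avg:1m are illustrative.

# redis_ts_compaction_sketch.py
# A minimal sketch of server-side downsampling with a compaction rule (requires redis-stack-server).
import time
import redis

r = redis.Redis(host='localhost', port=6379, db=0)
r.delete('temp:raw', 'temp:avg:1m')

# The raw series keeps one hour of samples; the compacted series stores 1-minute averages
r.ts().create('temp:raw', retention_msecs=3_600_000)
r.ts().create('temp:avg:1m')
r.ts().createrule('temp:raw', 'temp:avg:1m', aggregation_type='avg', bucket_size_msec=60_000)

# Every sample written to temp:raw is now rolled up into temp:avg:1m automatically
now = int(time.time() * 1000)
for i, value in enumerate([21.5, 22.0, 22.5, 23.0]):
    r.ts().add('temp:raw', now + i * 15_000, value)  # one sample every 15 seconds

# Only *closed* buckets appear in the compacted series; the current bucket is flushed
# when a later sample opens the next bucket, so this may print an empty list at first.
print(r.ts().range('temp:avg:1m', '-', '+'))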

4. RedisBloom: Probabilistic Data Structures

RedisBloom offers highly memory-efficient probabilistic data structures, useful when you need to answer “probably yes / definitely no” type of questions, or count unique items without exact precision.

Key structures:

  • Bloom Filter: For efficient membership testing (e.g., “Has this user seen this ad before?” or “Is this email already registered?”). Low false positive rate, no false negatives.
  • Cuckoo Filter: Similar to Bloom, but also supports item deletion.
  • Count-Min Sketch: For frequency counting (e.g., “How many times has this item appeared?”). Small error margins.
  • Top-K: Identify the most frequent items in a stream.
  • HyperLogLog (HLL): For approximate counting of unique items (e.g., unique visitors). Extremely memory efficient. Note: HLL is part of core Redis, not a module.

Core RedisBloom Commands:

  • BF.ADD key item: Adds an item to a Bloom filter.
  • BF.EXISTS key item: Checks if an item exists in a Bloom filter.
  • CMS.INITBYDIM key width depth: Initializes a Count-Min Sketch.
  • CMS.INCRBY key item count [item count ...]: Increments count for item(s).
  • CMS.QUERY key item [item ...]: Queries count for item(s).

Node.js Example (Bloom Filter and HyperLogLog):

// redis_bloom_hll_demo.js
const Redis = require('ioredis');
const redis = new Redis();

async function runBloomHLLExamples() {
  const seenItemsKey = 'seen:items:bloom';
  const uniqueVisitorsKey = 'unique:visitors:hll';
  
  try {
    console.log('--- RedisBloom (and HLL) Examples ---');
    await redis.del(seenItemsKey, uniqueVisitorsKey); // Clear previous data

    // 1. Bloom Filter (BF.ADD, BF.EXISTS)
    // BF.RESERVE key error_rate capacity (fails if the key already exists; BF.ADD auto-creates with defaults)
    await redis.call('BF.RESERVE', seenItemsKey, 0.01, 1000); // 1% error rate, 1000 items capacity
    console.log(`\nCreated Bloom Filter '${seenItemsKey}'.`);

    await redis.call('BF.ADD', seenItemsKey, 'product:A');
    await redis.call('BF.ADD', seenItemsKey, 'product:B');
    await redis.call('BF.ADD', seenItemsKey, 'user:123:promo');
    console.log("Added 'product:A', 'product:B', 'user:123:promo' to Bloom Filter.");

    console.log(`Does 'product:A' exist? ${await redis.call('BF.EXISTS', seenItemsKey, 'product:A') ? 'Yes' : 'No'}`); // Yes
    console.log(`Does 'product:C' exist? ${await redis.call('BF.EXISTS', seenItemsKey, 'product:C') ? 'Yes' : 'No'}`); // No
    console.log(`Does 'user:456:promo' exist? ${await redis.call('BF.EXISTS', seenItemsKey, 'user:456:promo') ? 'Yes' : 'No'}`); // No

    // 2. HyperLogLog (PFADD, PFCOUNT) - Core Redis feature, not a module
    console.log('\n--- HyperLogLog Examples ---');

    await redis.pfadd(uniqueVisitorsKey, 'user:1', 'user:2', 'user:1', 'user:3');
    console.log("Added 'user:1', 'user:2', 'user:1', 'user:3' to HLL.");
    console.log(`Approx. unique visitors: ${await redis.pfcount(uniqueVisitorsKey)}`); // Output: 3

    await redis.pfadd(uniqueVisitorsKey, 'user:4', 'user:5');
    console.log("Added 'user:4', 'user:5' to HLL.");
    console.log(`Approx. unique visitors: ${await redis.pfcount(uniqueVisitorsKey)}`); // Output: 5

    // PFCOUNT can count multiple HLLs
    const anotherHLLKey = 'unique:visitors:hll_tomorrow';
    await redis.del(anotherHLLKey);
    await redis.pfadd(anotherHLLKey, 'user:3', 'user:6');
    console.log(`Approx. unique visitors across both HLLs: ${await redis.pfcount(uniqueVisitorsKey, anotherHLLKey)}`); // Output: 6 (user:3 is counted once)

  } catch (err) {
    console.error('Error in RedisBloom/HLL examples:', err);
  } finally {
    await redis.del(seenItemsKey, uniqueVisitorsKey);
    await redis.quit();
  }
}

// Ensure you are running redis/redis-stack-server for RedisBloom features
// runBloomHLLExamples();
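
The Count-Min Sketch commands listed above (and Top-K, mentioned under key structures) are not covered by the Node.js example, so here is a brief sketch using redis-py; the key names and sizing parameters are illustrative only.

# redis_cms_topk_sketch.py
# A minimal sketch (requires redis/redis-stack-server and redis-py 4.x).
import redis

r = redis.Redis(host='localhost', port=6379, db=0)
r.delete('page:views:cms', 'page:views:topk')

# Count-Min Sketch: approximate per-item counters in a fixed amount of memory
r.cms().initbydim('page:views:cms', width=2000, depth=5)
r.cms().incrby('page:views:cms', ['/home', '/pricing', '/home'], [1, 1, 1])
print(r.cms().query('page:views:cms', '/home', '/pricing'))  # approximately [2, 1]

# Top-K: track the k most frequent items seen so far
r.topk().reserve('page:views:topk', k=3, width=8, depth=7, decay=0.9)
for page in ['/home', '/home', '/pricing', '/docs', '/home', '/docs']:
    r.topk().add('page:views:topk', page)
print(r.topk().list('page:views:topk'))  # most frequent pages, e.g. [b'/home', b'/docs', b'/pricing']

r.delete('page:views:cms', 'page:views:topk')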

5. Geospatial Capabilities (Core Redis)

Redis’s geospatial support is built on a core data structure (a Sorted Set under the hood), so no module is needed. The GEOADD and GEOSEARCH commands (GEOSEARCH supersedes the deprecated GEORADIUS) let you store longitude/latitude pairs and query for points within a given radius or bounding box.

Key features:

  • Store location data: Associate members with precise longitude and latitude.
  • Radius search: Find all items within a specified radius of a given point.
  • Bounding box search: Find all items within a rectangular area.
  • Distance calculation: Calculate the distance between two stored points.

Core Geospatial Commands:

  • GEOADD key longitude latitude member [longitude latitude member ...]
  • GEODIST key member1 member2 [unit]
  • GEOSEARCH key [FROMMEMBER member | FROMLONLAT longitude latitude] [BYRADIUS radius unit | BYBOX width height unit] [ASC|DESC] [COUNT count] [WITHCOORD] [WITHDIST] [WITHHASH]

Python Example:

# redis_geospatial_demo.py
import redis

r = redis.Redis(host='localhost', port=6379, db=0)

def run_redis_geospatial_examples_py():
    stores_key = 'stores:locations'
    r.delete(stores_key)  # Clear old data (the synchronous client needs no await)

    try:
        print('--- Redis Geospatial Examples (Python) ---')

        # 1. GEOADD: Add store locations
        # redis-py expects a flat sequence: [lon1, lat1, member1, lon2, lat2, member2, ...]
        r.geoadd(stores_key, [
            13.3777, 52.5162, 'Berlin Store',      # Berlin
            -0.1278, 51.5074, 'London Store',      # London
            2.3522, 48.8566, 'Paris Store',        # Paris
            12.4964, 41.9028, 'Rome Store',        # Rome
            77.5946, 12.9716, 'Bangalore Store',   # Bangalore (added for distant search)
        ])
        print(f"\nAdded store locations to '{stores_key}'.")

        # 2. GEODIST: Calculate distance between two stores
        distance_berlin_paris = r.geodist(stores_key, 'Berlin Store', 'Paris Store', unit='km')
        print(f"\nDistance between Berlin and Paris Stores: {distance_berlin_paris:.2f} km")

        # 3. GEOSEARCH (FROMLONLAT BYRADIUS): Find stores within a radius
        print("\n--- Stores near Paris (500km radius) ---")
        # GEORADIUS is deprecated in favor of GEOSEARCH (since Redis 6.2)
        # redis-py exposes the older command as georadius() and the newer one as geosearch()

        # Using geosearch() (available from redis-py 4.0)
        # FROM: (longitude, latitude)
        # BYRADIUS: radius in km
        # WITHDIST: Include distance from center
        # WITHCOORD: Include coordinates
        # ASC: Sort by distance ascending
        stores_near_paris = r.geosearch(
            stores_key,
            longitude=2.3522, latitude=48.8566, # Center: Paris
            radius=500, unit='km',
            withdist=True, withcoord=True, sort='ASC'
        )
        # With WITHDIST and WITHCOORD, redis-py returns [member, distance, (lon, lat)] per result
        for name, dist, coord in stores_near_paris:
            print(f"  Store: {name.decode('utf-8')}, Distance: {dist:.2f} km, Coordinates: {coord}")
        # Expected: Paris Store and London Store (Berlin and Rome are farther than 500 km)

        # 4. GEOSEARCH (FROMMEMBER BYRADIUS): Find stores near another store
        print("\n--- Stores near London (400km radius) ---")
        stores_near_london = r.geosearch(
            stores_key,
            member='London Store',
            radius=400, unit='km',
            withdist=True, withcoord=True
        )
        for name, dist, coord in stores_near_london:
            print(f"  Store: {name.decode('utf-8')}, Distance: {dist:.2f} km, Coordinates: {coord}")
        # Expected: London Store, Paris Store

    except Exception as e:
        print(f"Error in Redis Geospatial examples (Python): {e}")
    finally:
        r.delete(stores_key)
        r.close()

# run_redis_geospatial_examples_py()
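
The bounding-box search listed under key features is not shown above. Here is a quick sketch using geosearch with a width/height box instead of a radius; the key stores:locations:box and the box dimensions are illustrative only.

# redis_geo_bybox_sketch.py
# A minimal sketch of a bounding-box search with GEOSEARCH ... BYBOX (redis-py).
import redis

r = redis.Redis(host='localhost', port=6379, db=0)
r.delete('stores:locations:box')
r.geoadd('stores:locations:box', [2.3522, 48.8566, 'Paris Store', -0.1278, 51.5074, 'London Store'])

# All stores inside a 1000 km wide x 800 km high rectangle centred on Paris
in_box = r.geosearch(
    'stores:locations:box',
    longitude=2.3522, latitude=48.8566,
    width=1000, height=800, unit='km',
    withdist=True
)
print(in_box)  # e.g. [[b'Paris Store', 0.0], [b'London Store', ~343.6]]

r.delete('stores:locations:box')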

Full Node.js Example with Advanced Features

// full_advanced_features.js
const Redis = require('ioredis');
const redis = new Redis();

async function runAllAdvancedExamples() {
  console.log('--- Running All Advanced Redis Features Examples ---');

  // --- RedisJSON ---
  console.log('\n### RedisJSON Demo ###');
  await runRedisJSONExamples();

  // --- Redis Query Engine (Search & Indexing) ---
  console.log('\n### Redis Query Engine Demo (requires redis-stack-server) ###');
  await runQueryEngineExamples();

  // --- RedisTimeSeries (Python example only here, but conceptually similar) ---
  console.log('\n### RedisTimeSeries (Conceptual - check Python for full code) ###');
  console.log('RedisTimeSeries allows high-speed ingestion and querying of time-series data.');
  console.log('For example, you could track sensor readings or stock prices over time.');
  console.log('TS.ADD my_temp_sensor 1678886400000 22.5');
  console.log('TS.RANGE my_temp_sensor - + AGGREGATION avg 60');

  // --- RedisBloom (and HyperLogLog) ---
  console.log('\n### RedisBloom (Bloom Filter & HLL) Demo ###');
  await runBloomHLLExamples();

  // --- Geospatial Capabilities ---
  console.log('\n### Redis Geospatial Demo (Conceptual - check Python for full code) ###');
  console.log('Redis can store geographic coordinates and query for points within a radius.');
  console.log('GEOADD locations -0.1278 51.5074 "London"');
  console.log('GEOSEARCH locations FROMMEMBER "London" BYRADIUS 50 km WITHCOORD');


  console.log('\n--- All Advanced Redis Features Examples Complete ---');
  await redis.quit();
}

// Call the main function to execute all examples.
// Note: this file assumes the run*Examples functions from the earlier demos are imported
// (or pasted) above. Each sub-function opens and closes its own connection; for production,
// manage a single shared connection instead.
// To run this, ensure a Redis Stack server is running and comment out the individual
// run calls inside the sub-function files.
// runAllAdvancedExamples();

Exercises / Mini-Challenges

  1. Product Search with RedisJSON and Query Engine:

    • Store product information (ID, name, description, category, price, weight, vector_embedding (simulate with a dummy array)) as JSON documents in Redis.
    • Create a Query Engine index on these JSON documents, indexing name and description as TEXT, category as TAG, price and weight as NUMERIC.
    • Perform the following searches:
      • Full-text search for products containing “laptop” in their name or description.
      • Products in the “electronics” category costing between $500 and $1000.
      • Products that are “lightweight” (e.g., weight < 2).
    • Challenge: If you were to implement a “related products” feature using vector_embedding, how would you modify the index and what Query Engine command would you use? (Hint: VECTOR_RANGE or KNN in FT.SEARCH).
  2. IoT Sensor Data Analytics with RedisTimeSeries:

    • Simulate multiple IoT devices (e.g., device:1, device:2) sending temperature readings every few seconds.
    • Use TS.CREATE to create a TimeSeries for each device, with labels like location and type. Set a short RETENTION (e.g., 1 hour).
    • TS.ADD temperature data points to these time series.
    • Retrieve the average temperature for a specific device over the last 15 minutes, aggregated into 1-minute buckets.
    • Challenge: Retrieve the maximum temperature recorded across all devices in the “warehouse” location over the last 30 minutes, aggregated hourly. (Hint: TS.MRANGE with FILTER).
  3. Newsletter Subscription Management with RedisBloom:

    • Implement a system to check if an email address is already subscribed to your newsletter before adding it.
    • Use a Bloom Filter (BF.ADD, BF.EXISTS) for email:blacklist. If an email is in the blacklist, reject the subscription.
    • Use a separate Bloom Filter for subscribed:emails to quickly check for existing subscribers.
    • Challenge: After adding a new subscriber, if the Bloom Filter indicates they are “probably” subscribed, how would you confirm definitively (without a full database scan) to avoid false positives affecting critical business logic? (This highlights the trade-offs of probabilistic data structures).
  4. Local Store Finder with Geospatial Data:

    • Add several fictional store locations to Redis using GEOADD.
    • Implement a function that takes a user’s current longitude and latitude and returns a list of all stores within a 10km radius, sorted by distance.
    • Calculate the exact distance between two specific stores.
    • Challenge: Filter the results further by only showing stores that are “open now” (assume is_open is a field in a separate Redis Hash for each store, e.g., store:info:<store_name>). You’ll need to combine GEOSEARCH with client-side filtering or advanced server-side scripting (covered in advanced topics).

By engaging with these advanced topics and tackling their challenges, you’ll gain mastery over Redis’s expanded capabilities, positioning yourself to build cutting-edge applications in search, AI, real-time analytics, and more. Next, we move to High Availability and Clustering, crucial for deploying Redis in production at scale.