{% extends "base.html" %} {% block title %}Welcome - Nextcloud MCP Server{% endblock %} {% block extra_head %} {% endblock %} {% block extra_styles %} /* Welcome page specific styles */ .hero-section { background: linear-gradient(135deg, var(--color-primary-element) 0%, #0082c9 100%); color: white; padding: 60px 24px; margin: -24px -24px 40px -24px; border-radius: 0 0 var(--border-radius-large) var(--border-radius-large); text-align: center; } .hero-section h1 { color: white; font-size: 36px; margin: 0 0 16px 0; font-weight: 600; } .hero-section p { font-size: 18px; opacity: 0.95; max-width: 700px; margin: 0 auto; line-height: 1.6; } .feature-grid { display: grid; grid-template-columns: repeat(auto-fit, minmax(280px, 1fr)); gap: 24px; margin: 32px 0; } .feature-card { background: var(--color-main-background); border: 2px solid var(--color-border); border-radius: var(--border-radius-large); padding: 24px; transition: all 0.2s; cursor: pointer; text-decoration: none; color: inherit; display: block; } .feature-card:hover { border-color: var(--color-primary-element); box-shadow: 0 4px 12px rgba(0, 103, 158, 0.15); transform: translateY(-2px); } .feature-card h3 { color: var(--color-primary-element); font-size: 20px; margin: 12px 0 8px 0; font-weight: 600; display: flex; align-items: center; gap: 12px; } .feature-card p { color: var(--color-text-maxcontrast); font-size: 14px; line-height: 1.6; margin: 8px 0 0 0; } .feature-icon { width: 48px; height: 48px; background: var(--color-primary-element-light); border-radius: var(--border-radius); display: flex; align-items: center; justify-content: center; margin-bottom: 8px; } .feature-icon svg { width: 28px; height: 28px; fill: var(--color-primary-element); } .info-section { background: var(--color-background-hover); border-radius: var(--border-radius-large); padding: 32px; margin: 32px 0; } .info-section h2 { color: var(--color-main-text); font-size: 24px; margin: 0 0 16px 0; border: none; padding: 0; } .info-section p { color: var(--color-text-maxcontrast); line-height: 1.7; margin: 12px 0; } .info-section ul { margin: 12px 0; padding-left: 24px; } .info-section li { color: var(--color-text-maxcontrast); line-height: 1.7; margin: 8px 0; } .info-section code { background: var(--color-main-background); padding: 2px 8px; border-radius: var(--border-radius); font-size: 13px; } .auth-status { background: var(--color-primary-element-light); border-left: 4px solid var(--color-primary-element); padding: 16px 20px; margin: 24px 0; border-radius: var(--border-radius); display: flex; align-items: center; gap: 12px; } .auth-status svg { width: 24px; height: 24px; fill: var(--color-primary-element); flex-shrink: 0; } .auth-status-text { flex: 1; } .auth-status-text strong { display: block; color: var(--color-main-text); font-size: 14px; margin-bottom: 4px; } .auth-status-text span { color: var(--color-text-maxcontrast); font-size: 13px; } {% endblock %} {% block content %}

Welcome to Nextcloud MCP Server

Interactive user interface for semantic search and document retrieval. Test queries, visualize results, and explore your Nextcloud content using RAG workflows.

Authenticated as: {{ username }}
Authentication mode: {{ auth_mode }}
{% if vector_sync_enabled %}

About Semantic Search

This interface provides access to semantic search capabilities powered by vector embeddings. Unlike traditional keyword search, semantic search understands the meaning of your queries and finds conceptually similar content across your Nextcloud apps.
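
To make the difference concrete, here is a tiny, hedged illustration (not the server's actual embedding pipeline) of how conceptually related text scores as similar; the model name is just one example of a 768-dimensional embedder:

# Illustrative only: shows why embeddings surface conceptually similar content.
# The embedding model the server actually uses may differ.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-mpnet-base-v2")  # one example of a 768-dim model

query = "health benefits of coffee"
docs = [
    "Habitual coffee consumption is linked to a lower risk of type 2 diabetes.",
    "Meeting notes from the Q3 planning session.",
]

query_vec = model.encode(query)
doc_vecs = model.encode(docs)

# Cosine similarity: the first document scores higher than the second because
# it is about the same topic, not because of exact keyword matches.
print(util.cos_sim(query_vec, doc_vecs))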

How it works:

  • Documents from Notes, Calendar, Files, Contacts, and Deck are indexed into a vector database
  • Each document chunk is converted to a 768-dimensional vector embedding that captures semantic meaning
  • Queries are also converted to embeddings and matched against document vectors using similarity search
  • Results can be retrieved with pure semantic search or with hybrid search that combines BM25 keyword matching and semantic similarity (see the retrieval sketch below)
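
The indexing and retrieval code lives in the server itself, but the retrieval side can be sketched roughly as follows. This is a hedged illustration against the qdrant-client API; the collection name ("nextcloud_documents"), the named vectors ("dense", "bm25"), and the embedding models are assumptions, not the server's actual identifiers:

# Hedged sketch of hybrid retrieval, not the server's actual code.
from fastembed import SparseTextEmbedding
from qdrant_client import QdrantClient, models
from sentence_transformers import SentenceTransformer

dense_model = SparseTextEmbedding  # see note below; replaced two lines down
dense_model = SentenceTransformer("all-mpnet-base-v2")        # any 768-dim embedder
sparse_model = SparseTextEmbedding(model_name="Qdrant/bm25")  # BM25-style sparse vectors
client = QdrantClient(url="http://localhost:6333")

query = "health benefits of coffee"
dense_vec = dense_model.encode(query).tolist()
sparse_vec = next(sparse_model.embed([query]))

# Hybrid search: fetch dense and sparse candidate lists, merge them with
# Reciprocal Rank Fusion (RRF), and keep the top 5 chunks.
result = client.query_points(
    collection_name="nextcloud_documents",   # placeholder collection name
    prefetch=[
        models.Prefetch(query=dense_vec, using="dense", limit=20),
        models.Prefetch(
            query=models.SparseVector(
                indices=sparse_vec.indices.tolist(),
                values=sparse_vec.values.tolist(),
            ),
            using="bm25",
            limit=20,
        ),
    ],
    query=models.FusionQuery(fusion=models.Fusion.RRF),
    limit=5,
)
for point in result.points:
    print(point.score, point.payload.get("title"))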

RAG Workflow Integration

This UI allows you to test the same queries that Large Language Models (LLMs) would use in a Retrieval-Augmented Generation (RAG) workflow. When an AI assistant needs to answer questions about your data:

  • Step 1: The assistant converts your question into a search query
  • Step 2: The MCP server retrieves relevant document chunks using semantic search
  • Step 3: Retrieved context is passed to the LLM to generate an informed answer
MCP Sampling RAG Workflow
┌─────────────────┐
│   MCP Client    │  User asks: "What are health benefits of coffee?"
│  (Claude Code)  │
└────────┬────────┘
         │ (1) User question
         ↓
┌────────────────────────────────────────────────────────────────────────┐
│                      Nextcloud MCP Server                          │
│  ┌──────────────────────────────────────────────────────────────────┐  │
│  │ nc_semantic_search_answer Tool (MCP Sampling-enabled)      │  │
│  │                                                                  │  │
│  │  (2) Semantic Search                                             │  │
│  │  ┌────────────────────────────────────────────────────────┐     │  │
│  │  │ Query: "health benefits of coffee"                     │     │  │
│  │  │ → Convert to 768D vector embedding                     │     │  │
│  │  │ → Search Qdrant (BM25 Hybrid + RRF fusion)             │     │  │
│  │  │ → Retrieve top 5 relevant document chunks              │     │  │
│  │  └────────────────────────────────────────────────────────┘     │  │
│  │                                                                  │  │
│  │  (3) Construct Prompt with Context                               │  │
│  │  ┌────────────────────────────────────────────────────────┐     │  │
│  │  │ "What are health benefits of coffee?                   │     │  │
│  │  │                                                         │     │  │
│  │  │  Documents:                                             │     │  │
│  │  │  - [MED-2155] Effects of habitual coffee consumption...│     │  │
│  │  │  - [MED-1646] Beverage consumption guidance...         │     │  │
│  │  │  - [MED-1627] Coffee and depression risk...            │     │  │
│  │  │  ...                                                    │     │  │
│  │  │                                                         │     │  │
│  │  │  Provide answer with citations."                        │     │  │
│  │  └────────────────────────────────────────────────────────┘     │  │
│  │                                                                  │  │
│  │  (4) MCP Sampling Request                                        │  │
│  │  ─────────────────────────────────────────────────────────────> │  │
│  └──────────────────────────────────────────────────────────────────┘  │
└────────────────────────────────────────────────────────────────────────┘
         │
         │ Sampling request with prompt + context
         ↓
┌─────────────────┐
│   MCP Client    │  (5) Client's LLM generates answer using retrieved context
│    (Claude)     │      → "Coffee consumption (2-3 cups/day) is associated with
└────────┬────────┘         reduced risk of type 2 diabetes, cardiovascular disease,
         │                  and improved liver health (Document 1, 2)..."
         │
         │ (6) Answer with citations
         ↓
┌─────────────────┐
│      User       │  Receives comprehensive answer with source citations
└─────────────────┘

Key Point: The MCP server retrieves context but doesn't generate answers itself. Through MCP sampling, it asks the client's LLM to generate the response, which gives users full control over which model is used and keeps LLM inference on the client side.
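
As a rough sketch of what such a sampling-enabled tool can look like with the MCP Python SDK (the prompt wording and the retrieve_chunks() helper are illustrative assumptions, not the server's actual implementation):

# Hedged sketch of an MCP-sampling tool; retrieve_chunks() is a hypothetical
# helper standing in for the hybrid search step shown above.
from mcp.server.fastmcp import Context, FastMCP
from mcp.types import SamplingMessage, TextContent

mcp = FastMCP("nextcloud-mcp-server")

@mcp.tool()
async def nc_semantic_search_answer(question: str, ctx: Context) -> str:
    """Retrieve context, then ask the client's own LLM to answer via MCP sampling."""
    chunks = await retrieve_chunks(question, limit=5)  # hypothetical helper
    context_block = "\n".join(f"- [{c.id}] {c.excerpt}" for c in chunks)
    prompt = (
        f"{question}\n\n"
        f"Documents:\n{context_block}\n\n"
        "Provide an answer with citations."
    )

    # Sampling request: the server never runs a model itself; it asks the
    # connected MCP client to generate the answer with its own LLM.
    result = await ctx.session.create_message(
        messages=[
            SamplingMessage(role="user", content=TextContent(type="text", text=prompt))
        ],
        max_tokens=1000,
    )
    return result.content.text if isinstance(result.content, TextContent) else ""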

By using this interface, you can preview search results, understand relevance scores, and verify that the system retrieves the right information before it reaches the LLM.

Available Features

{% else %}

Vector Sync is Disabled

Semantic search and vector visualization features are currently disabled. To enable these features, set VECTOR_SYNC_ENABLED=true in your environment configuration.

Learn more: Configuration Guide

Available Features

{% endif %}

Documentation

For detailed information about configuration, authentication modes, and advanced features, please refer to the project documentation:

{% endblock %}