Compare commits

...

41 Commits

Author SHA1 Message Date
Ruslan Bakiev
56a7734e8e Switch from Infisical to Vault for secret loading
Some checks failed
Build Docker Image / build (push) Failing after 2m52s
2026-03-09 11:20:34 +07:00
Ruslan Bakiev
e577b41a86 Fix ARANGODB_URL env var name
All checks were successful
Build Docker Image / build (push) Successful in 1m21s
2026-03-09 10:12:43 +07:00
Ruslan Bakiev
294f4077f0 Add Infisical secret loading at startup
All checks were successful
Build Docker Image / build (push) Successful in 2m4s
2026-03-09 10:00:33 +07:00
Ruslan Bakiev
52cbed91f8 Migrate geo backend from Django/Graphene to Express + Apollo Server + arangojs
All checks were successful
Build Docker Image / build (push) Successful in 1m5s
Replace Python stack with TypeScript. All 30+ GraphQL queries preserved including
phase-based routing (Dijkstra), H3 clustering, K_SHORTEST_PATHS, and external
routing services (GraphHopper, OpenRailRouting). Single public endpoint, no auth.
2026-03-09 09:45:49 +07:00
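The "phase-based routing" this migration commit preserves can be sketched as a small transition table. This is an illustrative reduction of the phase states that appear in geo_app/route_engine.py further down the diff, not the service's actual API:

```python
# Sketch of the auto -> rail* -> auto phase model. Phase names mirror the
# route engine's states; the table form is illustrative.

# (current phase, edge transport type) -> next phase; missing pairs are disallowed.
PHASE_TRANSITIONS = {
    ("end_auto", "auto"): "end_auto_done",    # at most one auto leg at the end
    ("end_auto", "rail"): "rail",
    ("end_auto_done", "rail"): "rail",
    ("rail", "rail"): "rail",                 # any number of rail legs
    ("rail", "auto"): "start_auto_done",      # at most one auto leg at the start
}

def next_phase(phase, transport_type):
    """Return the next phase, or None if this edge type is not allowed."""
    if transport_type == "offer":
        return "offer"  # offer edges terminate the search in every phase
    return PHASE_TRANSITIONS.get((phase, transport_type))
```

Encoding the constraint as a lookup table keeps the Dijkstra expansion loop free of per-phase branching.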
Ruslan Bakiev
31fc8cbc34 Move graph routing into route engine
All checks were successful
Build Docker Image / build (push) Successful in 1m56s
2026-02-07 11:41:12 +07:00
Ruslan Bakiev
4ec6506633 Unify graph routing analyzer
All checks were successful
Build Docker Image / build (push) Successful in 1m53s
2026-02-07 11:28:16 +07:00
Ruslan Bakiev
e99cbf4882 Unify hub offer queries and drop radius filters
All checks were successful
Build Docker Image / build (push) Successful in 2m7s
2026-02-07 11:05:52 +07:00
Ruslan Bakiev
3648366ebe feat(geo): graph-based hubs for product
All checks were successful
Build Docker Image / build (push) Successful in 1m52s
2026-02-07 10:14:18 +07:00
Ruslan Bakiev
eb73c5b1a1 feat(geo): filter clustered nodes by product/hub/supplier
All checks were successful
Build Docker Image / build (push) Successful in 2m42s
2026-02-07 08:27:54 +07:00
Ruslan Bakiev
f5f261ff89 Add quote calculations query
All checks were successful
Build Docker Image / build (push) Successful in 1m53s
2026-02-06 18:57:24 +07:00
Ruslan Bakiev
443dc7fa5d Fix nearest hubs fallback when source missing
All checks were successful
Build Docker Image / build (push) Successful in 1m32s
2026-02-05 20:10:11 +07:00
Ruslan Bakiev
09324bb25e Filter hubs to rail/sea and add graph-based nearest
All checks were successful
Build Docker Image / build (push) Successful in 3m9s
2026-02-05 18:41:07 +07:00
Ruslan Bakiev
387abf03e4 Remove cluster cache and query by bbox
All checks were successful
Build Docker Image / build (push) Successful in 3m0s
2026-02-05 10:26:19 +07:00
Ruslan Bakiev
9db56c5edc feat(schema): add bounds filtering to list endpoints
All checks were successful
Build Docker Image / build (push) Successful in 1m16s
Add west, south, east, north params to:
- hubs_list
- suppliers_list
- products_list

This enables filtering by map viewport bounds for the catalog.
2026-01-26 21:35:20 +07:00
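The west/south/east/north filtering this commit adds reduces to a simple viewport predicate. A minimal sketch, with illustrative field names rather than the service's schema:

```python
# Minimal sketch of viewport-bounds filtering (west/south/east/north),
# assuming items carry plain latitude/longitude fields.

def in_bounds(item, west, south, east, north):
    """True if the item's coordinates fall inside the map viewport."""
    return (west <= item["longitude"] <= east
            and south <= item["latitude"] <= north)

hubs = [
    {"name": "Moscow", "latitude": 55.75, "longitude": 37.62},
    {"name": "Vladivostok", "latitude": 43.12, "longitude": 131.89},
]
visible = [h for h in hubs if in_bounds(h, west=30.0, south=50.0, east=60.0, north=60.0)]
```

Note that viewports crossing the antimeridian (west > east) would need special handling that this sketch omits.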
Ruslan Bakiev
ca01a91019 Add integration tests for nearestOffers with hubUuid parameter
All checks were successful
Build Docker Image / build (push) Successful in 1m18s
2026-01-26 16:37:11 +07:00
Ruslan Bakiev
17081e13e4 Fix: resolve_offer_to_hub call in resolve_route_to_coordinate (same graphene self=None bug)
All checks were successful
Build Docker Image / build (push) Successful in 1m27s
2026-01-26 16:28:39 +07:00
Ruslan Bakiev
0c19135c49 Fix: resolve_offer_to_hub call in resolve_nearest_offers (self is None in graphene)
All checks were successful
Build Docker Image / build (push) Successful in 1m26s
2026-01-26 16:17:27 +07:00
Ruslan Bakiev
9ff7927463 Add hubsList, suppliersList, productsList resolvers and update nearestOffers
All checks were successful
Build Docker Image / build (push) Successful in 1m24s
- Add hubsList resolver for paginated hub list
- Add suppliersList resolver for paginated supplier list
- Add productsList resolver for paginated product list
- Update nearestOffers to support hubUuid parameter with route calculation
- Add OfferWithRouteType for offers with embedded routes
- Add supplier_name to OfferNodeType
2026-01-26 13:55:02 +07:00
Ruslan Bakiev
81f86b6538 Fix nodes_count query bind_vars - only add bounds when present
All checks were successful
Build Docker Image / build (push) Successful in 1m16s
2026-01-25 22:01:54 +07:00
Ruslan Bakiev
2e7f5e7863 Fix nodes query bind_vars - only add bounds when all coordinates present
All checks were successful
Build Docker Image / build (push) Successful in 1m16s
2026-01-25 21:54:22 +07:00
Ruslan Bakiev
64f7e4bdba Fix GraphQL types - add distance_km field
All checks were successful
Build Docker Image / build (push) Successful in 1m17s
- Add distance_km field to NodeType (used by nearestHubs)
- Add distance_km field to OfferNodeType (used by nearestOffers)
- Expand SupplierType with name, latitude, longitude, distance_km
- Fix nearestSuppliers to return full supplier info from nodes collection
- Fix nearestHubs and nearestOffers to pass distance_km to constructors

This fixes 8 failed integration tests for nearest* endpoints.

Resolves: Cannot query field 'distanceKm' on type 'NodeType/OfferNodeType'
2026-01-25 21:33:12 +07:00
Ruslan Bakiev
40f7f66f83 Add comprehensive tests for all geo GraphQL endpoints
All checks were successful
Build Docker Image / build (push) Successful in 1m22s
Created test suite covering all 8 main geo service endpoints:
- Basic: products, nodes (with filters/bounds), clusteredNodes
- Nearest: nearestHubs, nearestOffers, nearestSuppliers (with product filters)
- Routing: routeToCoordinate, autoRoute, railRoute
- Edge cases: invalid coordinates, zero radius, nonexistent UUIDs

Test suite uses real API calls to production GraphQL endpoint.
16 tests total across 4 test classes.

Files:
- tests/test_graphql_endpoints.py: Main test suite (600+ lines)
- tests/README.md: Documentation and usage guide
- pytest.ini: Pytest configuration
- run_tests.sh: Convenience script to run tests
- pyproject.toml: Added pytest and requests as dev dependencies
2026-01-25 21:12:59 +07:00
Ruslan Bakiev
56df2ab37b Fix OfferNodeType initialization error
All checks were successful
Build Docker Image / build (push) Successful in 1m19s
Remove supplier_name field from OfferNodeType constructor in resolve_nearest_offers - this field does not exist in the type definition and causes 400 errors.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-25 20:49:22 +07:00
Ruslan Bakiev
5363b113cf Merge geo-nearest-endpoints - add coordinate-based API endpoints
All checks were successful
Build Docker Image / build (push) Successful in 1m18s
2026-01-25 17:10:55 +07:00
Ruslan Bakiev
46c87c7caa Add coordinate-based nearest endpoints to geo API
- Add nearestHubs(lat, lon, radius, productUuid?) - hubs near coordinates
- Add nearestOffers(lat, lon, radius, productUuid?) - offers near coordinates
- Add nearestSuppliers(lat, lon, radius, productUuid?) - suppliers near coordinates
- Add routeToCoordinate(offerUuid, lat, lon) - route from offer to coordinates

These unified endpoints work with coordinates instead of UUIDs, simplifying
the frontend logic by removing the need for entity-specific queries like
GetProductsNearHub, GetHubsNearOffer, etc.
2026-01-25 17:10:32 +07:00
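All of the nearest* endpoints share one shape: filter candidates by a radius and return the closest first. A hedged sketch, assuming distance_km is precomputed (e.g. by ArangoDB's DISTANCE()):

```python
# Shared shape of the nearest* endpoints: radius filter plus distance sort.
# distance_km is assumed to be precomputed per candidate.

def nearest(items, radius_km, limit=5):
    hits = [i for i in items if i["distance_km"] <= radius_km]
    return sorted(hits, key=lambda i: i["distance_km"])[:limit]

candidates = [
    {"uuid": "a", "distance_km": 120.0},
    {"uuid": "b", "distance_km": 15.5},
    {"uuid": "c", "distance_km": 480.0},
]
```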
Ruslan Bakiev
27b05cf362 Add bounds filtering to nodes query for map-based selection
All checks were successful
Build Docker Image / build (push) Successful in 1m10s
2026-01-24 12:13:29 +07:00
Ruslan Bakiev
e342d68197 Add suppliersForProduct and hubsForProduct queries for cascading filters
All checks were successful
Build Docker Image / build (push) Successful in 1m14s
2026-01-24 11:54:24 +07:00
Ruslan Bakiev
3a24f4a9cd Fix supplier query - aggregate through offers
All checks were successful
Build Docker Image / build (push) Successful in 1m21s
Only show suppliers that have active offers.
2026-01-23 11:23:54 +07:00
Ruslan Bakiev
0106c84daf Add offers_count field to ProductType
All checks were successful
Build Docker Image / build (push) Successful in 1m16s
2026-01-22 17:22:05 +07:00
Ruslan Bakiev
596bdbf1c5 Add node_type parameter to clusteredNodes for unified server-side clustering
All checks were successful
Build Docker Image / build (push) Successful in 1m31s
2026-01-16 17:29:42 +07:00
Ruslan Bakiev
07f89ba5fb refactor(geo): Clean up queries - rename offers_to_hub to offers_by_hub, add offer_to_hub
All checks were successful
Build Docker Image / build (push) Successful in 1m24s
- Remove find_routes, find_product_routes, delivery_to_hub queries
- Rename offers_to_hub → offers_by_hub with proper phase-based routing (auto → rail* → auto)
- Add offer_to_hub query for single offer to hub connection
- Both new queries use Dijkstra-like search with transport phases
2026-01-16 16:54:00 +07:00
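The "Dijkstra-like search" named in this commit is a standard priority-queue shortest path; this sketch omits the transport-phase dimension (the real engine keys its visited set on (node, phase) pairs):

```python
import heapq

def shortest_cost(edges, start, goal):
    """edges: {node: [(neighbor, step_cost), ...]}. Returns cheapest cost or None."""
    queue = [(0, start)]
    best = {}
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue  # stale queue entry, a cheaper path was already settled
        best[node] = cost
        for neighbor, step in edges.get(node, []):
            new_cost = cost + step
            if new_cost < best.get(neighbor, float("inf")):
                heapq.heappush(queue, (new_cost, neighbor))
    return None
```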
Ruslan Bakiev
5112f52722 Fix _build_routes call in deliveryToHub
All checks were successful
Build Docker Image / build (push) Successful in 1m18s
2026-01-16 16:03:26 +07:00
Ruslan Bakiev
339db65514 Fix resolve_offers_to_hub to use DISTANCE() instead of graph traversal
All checks were successful
Build Docker Image / build (push) Successful in 1m23s
2026-01-16 15:56:25 +07:00
Ruslan Bakiev
b6f9b2d70b Replace graph traversal queries with DISTANCE() queries
All checks were successful
Build Docker Image / build (push) Successful in 1m53s
- Add new resolvers: products, offersByProduct, hubsNearOffer, suppliers,
  productsBySupplier, offersBySupplierProduct, productsNearHub, offersToHub, deliveryToHub
- Remove broken queries that caused OOM on 234k edges
- Use DISTANCE() for geographic proximity instead of graph traversal
2026-01-16 15:39:55 +07:00
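ArangoDB's DISTANCE(lat1, lon1, lat2, lon2) returns the great-circle distance in meters; a Python equivalent using the haversine formula looks like this:

```python
import math

# Great-circle distance in meters (haversine), matching the semantics of
# ArangoDB's DISTANCE() used by the new proximity queries.

def distance_m(lat1, lon1, lat2, lon2):
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Moscow to Saint Petersburg, roughly 630-640 km
d = distance_m(55.7558, 37.6173, 59.9343, 30.3351)
```

Unlike a graph traversal, this is O(1) per candidate pair, which is why it avoids the OOM on 234k edges.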
Ruslan Bakiev
a3b0b5ff79 Use supplier_uuid instead of team_uuid in findSupplierProductHubs
All checks were successful
Build Docker Image / build (push) Successful in 1m27s
2026-01-16 10:50:16 +07:00
Ruslan Bakiev
6084333704 Fix findSupplierProductHubs: use team_uuid instead of supplier_uuid
All checks were successful
Build Docker Image / build (push) Successful in 1m39s
2026-01-16 10:35:17 +07:00
Ruslan Bakiev
8f1e3be129 Trigger deploy for catalog navigation queries
All checks were successful
Build Docker Image / build (push) Successful in 1m26s
2026-01-16 10:13:40 +07:00
Ruslan Bakiev
b510dd54d6 feat: add catalog navigation queries
All checks were successful
Build Docker Image / build (push) Successful in 1m34s
- findProductsForHub: find products deliverable to a hub
- findHubsForProduct: find hubs where product can be delivered
- findSupplierProductHubs: find hubs for supplier's product
- findOffersForHubByProduct: find offers with routes (wrapper for findProductRoutes)
2026-01-16 01:32:55 +07:00
Ruslan Bakiev
fd7e10c193 Filter offer edges from route stages
All checks were successful
Build Docker Image / build (push) Successful in 1m42s
Offer edges connect offer nodes to locations and are not
transport stages. Filter them out in _build_route_from_edges()
to avoid showing 0km "offer" steps in the route stepper.
2026-01-15 00:32:57 +07:00
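The fix described above amounts to one filter pass before stages are emitted. A sketch, with illustrative edge dicts:

```python
# Offer edges link offer nodes to locations and carry no transport,
# so they are dropped before building route stages.

def transport_stages(edges):
    return [e for e in edges if e.get("transport_type") != "offer"]

edges = [
    {"transport_type": "offer", "distance_km": 0.0},
    {"transport_type": "auto", "distance_km": 12.5},
    {"transport_type": "rail", "distance_km": 800.0},
]
```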
Ruslan Bakiev
0330203a58 Replace pysupercluster with h3 for clustering
All checks were successful
Build Docker Image / build (push) Successful in 1m38s
2026-01-14 10:24:40 +07:00
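Cell-based clustering in the spirit of H3 can be sketched without the library: points in the same cell merge into one cluster positioned at the members' mean. A real implementation would map points to cells with h3.latlng_to_cell(); here a plain lat/lon grid stands in for H3's hexagons:

```python
from collections import defaultdict

# Grid-bucket clustering as a stand-in for H3 cells (illustrative only).

def cluster(points, cell_deg=1.0):
    cells = defaultdict(list)
    for lat, lon in points:
        key = (int(lat // cell_deg), int(lon // cell_deg))
        cells[key].append((lat, lon))
    return [
        {
            "count": len(members),
            "lat": sum(p[0] for p in members) / len(members),
            "lon": sum(p[1] for p in members) / len(members),
        }
        for members in cells.values()
    ]

points = [(55.1, 37.1), (55.2, 37.2), (10.0, 20.0)]
clusters = cluster(points)
```

Doing this server-side means the client receives one marker per cell instead of every node in the viewport.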
Ruslan Bakiev
7efa753092 Add server-side clustering with pysupercluster
Some checks failed
Build Docker Image / build (push) Failing after 2m14s
2026-01-14 10:12:39 +07:00
25 changed files with 5469 additions and 1252 deletions

2
.gitignore vendored Normal file

@@ -0,0 +1,2 @@
node_modules
dist


@@ -1,24 +1,26 @@
-FROM python:3.12-slim
-ENV PYTHONDONTWRITEBYTECODE=1 \
-    PYTHONUNBUFFERED=1 \
-    NIXPACKS_POETRY_VERSION=2.2.1
+FROM node:22-alpine AS builder
 WORKDIR /app
-RUN apt-get update \
-    && apt-get install -y --no-install-recommends build-essential curl \
-    && rm -rf /var/lib/apt/lists/*
+COPY package.json ./
+RUN npm install
-RUN python -m venv --copies /opt/venv
-ENV VIRTUAL_ENV=/opt/venv
-ENV PATH="/opt/venv/bin:$PATH"
+COPY tsconfig.json ./
+COPY src ./src
+RUN npm run build
-COPY . .
+FROM node:22-alpine
-RUN pip install --no-cache-dir poetry==$NIXPACKS_POETRY_VERSION \
-    && poetry install --no-interaction --no-ansi
+RUN apk add --no-cache curl jq
 ENV PORT=8000
 WORKDIR /app
-CMD ["sh", "-c", "poetry run python manage.py collectstatic --noinput && poetry run python -m gunicorn geo.wsgi:application --bind 0.0.0.0:${PORT:-8000}"]
+COPY package.json ./
+RUN npm install --omit=dev
+COPY --from=builder /app/dist ./dist
+COPY scripts ./scripts
+EXPOSE 8000
+CMD ["sh", "-c", ". /app/scripts/load-vault-env.sh && node dist/index.js"]


@@ -1 +0,0 @@
"""Geo Django project."""


@@ -1,148 +0,0 @@
import os
from pathlib import Path
from dotenv import load_dotenv
from infisical_sdk import InfisicalSDKClient
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration
load_dotenv()
INFISICAL_API_URL = os.environ["INFISICAL_API_URL"]
INFISICAL_CLIENT_ID = os.environ["INFISICAL_CLIENT_ID"]
INFISICAL_CLIENT_SECRET = os.environ["INFISICAL_CLIENT_SECRET"]
INFISICAL_PROJECT_ID = os.environ["INFISICAL_PROJECT_ID"]
INFISICAL_ENV = os.environ.get("INFISICAL_ENV", "prod")
client = InfisicalSDKClient(host=INFISICAL_API_URL)
client.auth.universal_auth.login(
    client_id=INFISICAL_CLIENT_ID,
    client_secret=INFISICAL_CLIENT_SECRET,
)
# Fetch secrets from /geo and /shared
for secret_path in ["/geo", "/shared"]:
    secrets_response = client.secrets.list_secrets(
        environment_slug=INFISICAL_ENV,
        secret_path=secret_path,
        project_id=INFISICAL_PROJECT_ID,
        expand_secret_references=True,
        view_secret_value=True,
    )
    for secret in secrets_response.secrets:
        os.environ[secret.secretKey] = secret.secretValue
BASE_DIR = Path(__file__).resolve().parent.parent
SECRET_KEY = os.getenv('DJANGO_SECRET_KEY', 'dev-secret-key-change-in-production')
DEBUG = os.getenv('DEBUG', 'False') == 'True'
# Sentry/GlitchTip configuration
SENTRY_DSN = os.getenv('SENTRY_DSN', '')
if SENTRY_DSN:
sentry_sdk.init(
dsn=SENTRY_DSN,
integrations=[DjangoIntegration()],
auto_session_tracking=False,
traces_sample_rate=0.01,
release=os.getenv('RELEASE_VERSION', '1.0.0'),
environment=os.getenv('ENVIRONMENT', 'production'),
send_default_pii=False,
debug=DEBUG,
)
ALLOWED_HOSTS = ['*']
CSRF_TRUSTED_ORIGINS = ['https://geo.optovia.ru']
INSTALLED_APPS = [
'whitenoise.runserver_nostatic',
'django.contrib.contenttypes',
'django.contrib.staticfiles',
'corsheaders',
'graphene_django',
'geo_app',
]
MIDDLEWARE = [
'corsheaders.middleware.CorsMiddleware',
'django.middleware.security.SecurityMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware',
'django.middleware.common.CommonMiddleware',
]
ROOT_URLCONF = 'geo.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
],
},
},
]
WSGI_APPLICATION = 'geo.wsgi.application'
# No database - we use ArangoDB directly
DATABASES = {}
# Internationalization
LANGUAGE_CODE = 'ru-ru'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_TZ = True
# Static files
STATIC_URL = '/static/'
STATIC_ROOT = BASE_DIR / 'staticfiles'
# Default primary key field type
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
# CORS
CORS_ALLOW_ALL_ORIGINS = False
CORS_ALLOWED_ORIGINS = ['https://optovia.ru']
CORS_ALLOW_CREDENTIALS = True
# GraphQL
GRAPHENE = {
'SCHEMA': 'geo_app.schema.schema',
}
# ArangoDB connection (internal M2M)
ARANGODB_INTERNAL_URL = os.getenv('ARANGODB_INTERNAL_URL', 'localhost:8529')
ARANGODB_DATABASE = os.getenv('ARANGODB_DATABASE', 'optovia_maps')
ARANGODB_PASSWORD = os.getenv('ARANGODB_PASSWORD', '')
# Routing services (external APIs)
GRAPHHOPPER_EXTERNAL_URL = os.getenv('GRAPHHOPPER_EXTERNAL_URL', 'https://graphhopper.optovia.ru')
OPENRAILROUTING_EXTERNAL_URL = os.getenv('OPENRAILROUTING_EXTERNAL_URL', 'https://openrailrouting.optovia.ru')
# Logging
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'handlers': {
'console': {
'class': 'logging.StreamHandler',
},
},
'loggers': {
'django.request': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'geo_app': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
},
}


@@ -1,125 +0,0 @@
import os
from pathlib import Path
from dotenv import load_dotenv
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration
load_dotenv()
BASE_DIR = Path(__file__).resolve().parent.parent
SECRET_KEY = os.getenv('DJANGO_SECRET_KEY', 'dev-secret-key-change-in-production')
DEBUG = True
# Sentry/GlitchTip configuration
SENTRY_DSN = os.getenv('SENTRY_DSN', '')
if SENTRY_DSN:
sentry_sdk.init(
dsn=SENTRY_DSN,
integrations=[DjangoIntegration()],
auto_session_tracking=False,
traces_sample_rate=0.01,
release=os.getenv('RELEASE_VERSION', '1.0.0'),
environment=os.getenv('ENVIRONMENT', 'production'),
send_default_pii=False,
debug=DEBUG,
)
ALLOWED_HOSTS = ['*']
CSRF_TRUSTED_ORIGINS = ['https://geo.optovia.ru']
INSTALLED_APPS = [
'whitenoise.runserver_nostatic',
'django.contrib.contenttypes',
'django.contrib.staticfiles',
'corsheaders',
'graphene_django',
'geo_app',
]
MIDDLEWARE = [
'corsheaders.middleware.CorsMiddleware',
'django.middleware.security.SecurityMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware',
'django.middleware.common.CommonMiddleware',
]
ROOT_URLCONF = 'geo.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
],
},
},
]
WSGI_APPLICATION = 'geo.wsgi.application'
# No database - we use ArangoDB directly
DATABASES = {}
# Internationalization
LANGUAGE_CODE = 'ru-ru'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_TZ = True
# Static files
STATIC_URL = '/static/'
STATIC_ROOT = BASE_DIR / 'staticfiles'
# Default primary key field type
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
# CORS
CORS_ALLOW_ALL_ORIGINS = False
CORS_ALLOWED_ORIGINS = ['http://localhost:3000', 'https://optovia.ru']
CORS_ALLOW_CREDENTIALS = True
# GraphQL
GRAPHENE = {
'SCHEMA': 'geo_app.schema.schema',
}
# ArangoDB connection (internal M2M)
ARANGODB_INTERNAL_URL = os.getenv('ARANGODB_INTERNAL_URL', 'localhost:8529')
ARANGODB_DATABASE = os.getenv('ARANGODB_DATABASE', 'optovia_maps')
ARANGODB_PASSWORD = os.getenv('ARANGODB_PASSWORD', '')
# Routing services (external APIs)
GRAPHHOPPER_EXTERNAL_URL = os.getenv('GRAPHHOPPER_EXTERNAL_URL', 'https://graphhopper.optovia.ru')
OPENRAILROUTING_EXTERNAL_URL = os.getenv('OPENRAILROUTING_EXTERNAL_URL', 'https://openrailrouting.optovia.ru')
# Logging
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'handlers': {
'console': {
'class': 'logging.StreamHandler',
},
},
'loggers': {
'django.request': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'geo_app': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
},
}


@@ -1,7 +0,0 @@
from django.urls import path
from graphene_django.views import GraphQLView
from django.views.decorators.csrf import csrf_exempt

urlpatterns = [
    path('graphql/public/', csrf_exempt(GraphQLView.as_view(graphiql=True))),
]


@@ -1,5 +0,0 @@
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'geo.settings')
application = get_wsgi_application()


@@ -1 +0,0 @@
"""Geo app - logistics graph operations."""


@@ -1,6 +0,0 @@
from django.apps import AppConfig


class GeoAppConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'geo_app'


@@ -1,49 +0,0 @@
"""ArangoDB client singleton."""
import logging
from arango import ArangoClient
from django.conf import settings
logger = logging.getLogger(__name__)
_db = None
def get_db():
"""Get ArangoDB database connection (singleton)."""
global _db
if _db is None:
hosts = settings.ARANGODB_INTERNAL_URL
if not hosts.startswith("http"):
hosts = f"http://{hosts}"
client = ArangoClient(hosts=hosts)
_db = client.db(
settings.ARANGODB_DATABASE,
username='root',
password=settings.ARANGODB_PASSWORD,
)
logger.info(
"Connected to ArangoDB: %s/%s",
hosts,
settings.ARANGODB_DATABASE,
)
return _db
def ensure_graph():
"""Ensure named graph exists for K_SHORTEST_PATHS queries."""
db = get_db()
graph_name = 'optovia_graph'
if db.has_graph(graph_name):
return db.graph(graph_name)
logger.info("Creating graph: %s", graph_name)
return db.create_graph(
graph_name,
edge_definitions=[{
'edge_collection': 'edges',
'from_vertex_collections': ['nodes'],
'to_vertex_collections': ['nodes'],
}],
)

200
geo_app/route_engine.py Normal file

@@ -0,0 +1,200 @@
"""Unified graph routing helpers."""
import heapq
from .arango_client import ensure_graph
def _allowed_next_phase(current_phase, transport_type):
"""
Phase-based routing: auto → rail* → auto.
- end_auto: allow one auto, rail, or offer
- end_auto_done: auto used — rail or offer
- rail: any number of rail, then one auto or offer
- start_auto_done: auto used — only offer
"""
if current_phase == 'end_auto':
if transport_type == 'offer':
return 'offer'
if transport_type == 'auto':
return 'end_auto_done'
if transport_type == 'rail':
return 'rail'
return None
if current_phase == 'end_auto_done':
if transport_type == 'offer':
return 'offer'
if transport_type == 'rail':
return 'rail'
return None
if current_phase == 'rail':
if transport_type == 'offer':
return 'offer'
if transport_type == 'rail':
return 'rail'
if transport_type == 'auto':
return 'start_auto_done'
return None
if current_phase == 'start_auto_done':
if transport_type == 'offer':
return 'offer'
return None
return None
def _allowed_types_for_phase(phase):
if phase == 'end_auto':
return ['auto', 'rail', 'offer']
if phase == 'end_auto_done':
return ['rail', 'offer']
if phase == 'rail':
return ['rail', 'auto', 'offer']
if phase == 'start_auto_done':
return ['offer']
return ['offer']
def _fetch_neighbors(db, node_key, allowed_types):
aql = """
FOR edge IN edges
FILTER edge.transport_type IN @types
FILTER edge._from == @node_id OR edge._to == @node_id
LET neighbor_id = edge._from == @node_id ? edge._to : edge._from
LET neighbor = DOCUMENT(neighbor_id)
FILTER neighbor != null
RETURN {
neighbor_key: neighbor._key,
neighbor_doc: neighbor,
from_id: edge._from,
to_id: edge._to,
transport_type: edge.transport_type,
distance_km: edge.distance_km,
travel_time_seconds: edge.travel_time_seconds
}
"""
cursor = db.aql.execute(
aql,
bind_vars={'node_id': f"nodes/{node_key}", 'types': allowed_types},
)
return list(cursor)
def graph_find_targets(db, start_uuid, target_predicate, route_builder, limit=10, max_expansions=20000):
"""Unified graph traversal: auto → rail* → auto, returns routes for target nodes."""
ensure_graph()
nodes_col = db.collection('nodes')
start = nodes_col.get(start_uuid)
if not start:
return []
queue = []
counter = 0
heapq.heappush(queue, (0, counter, start_uuid, 'end_auto'))
visited = {}
predecessors = {}
node_docs = {start_uuid: start}
found = []
expansions = 0
while queue and len(found) < limit and expansions < max_expansions:
cost, _, node_key, phase = heapq.heappop(queue)
if (node_key, phase) in visited and cost > visited[(node_key, phase)]:
continue
visited[(node_key, phase)] = cost
node_doc = node_docs.get(node_key)
if node_doc and target_predicate(node_doc):
path_edges = []
state = (node_key, phase)
current_key = node_key
while state in predecessors:
prev_state, edge_info = predecessors[state]
prev_key = prev_state[0]
path_edges.append((current_key, prev_key, edge_info))
state = prev_state
current_key = prev_key
route = route_builder(path_edges, node_docs) if route_builder else None
distance_km = route.total_distance_km if route else None
found.append({
'node': node_doc,
'route': route,
'distance_km': distance_km,
'cost': cost,
})
continue
neighbors = _fetch_neighbors(db, node_key, _allowed_types_for_phase(phase))
expansions += 1
for neighbor in neighbors:
transport_type = neighbor.get('transport_type')
next_phase = _allowed_next_phase(phase, transport_type)
if next_phase is None:
continue
travel_time = neighbor.get('travel_time_seconds')
distance_km = neighbor.get('distance_km')
neighbor_key = neighbor.get('neighbor_key')
if not neighbor_key:
continue
node_docs[neighbor_key] = neighbor.get('neighbor_doc')
step_cost = travel_time if travel_time is not None else (distance_km or 0)
new_cost = cost + step_cost
state_key = (neighbor_key, next_phase)
if state_key in visited and new_cost >= visited[state_key]:
continue
counter += 1
heapq.heappush(queue, (new_cost, counter, neighbor_key, next_phase))
predecessors[state_key] = ((node_key, phase), neighbor)
return found
def snap_to_nearest_hub(db, lat, lon):
aql = """
FOR hub IN nodes
FILTER hub.node_type == 'logistics' OR hub.node_type == null
FILTER hub.product_uuid == null
LET types = hub.transport_types != null ? hub.transport_types : []
FILTER ('rail' IN types) OR ('sea' IN types)
FILTER hub.latitude != null AND hub.longitude != null
LET dist = DISTANCE(hub.latitude, hub.longitude, @lat, @lon) / 1000
SORT dist ASC
LIMIT 1
RETURN hub
"""
cursor = db.aql.execute(aql, bind_vars={'lat': lat, 'lon': lon})
hubs = list(cursor)
return hubs[0] if hubs else None
def resolve_start_hub(db, source_uuid=None, lat=None, lon=None):
nodes_col = db.collection('nodes')
if source_uuid:
node = nodes_col.get(source_uuid)
if not node:
return None
if node.get('node_type') in ('logistics', None):
types = node.get('transport_types') or []
if ('rail' in types) or ('sea' in types):
return node
node_lat = node.get('latitude')
node_lon = node.get('longitude')
if node_lat is None or node_lon is None:
return None
return snap_to_nearest_hub(db, node_lat, node_lon)
if lat is None or lon is None:
return None
return snap_to_nearest_hub(db, lat, lon)


@@ -1,833 +0,0 @@
"""GraphQL schema for Geo service."""
import logging
import heapq
import math
import requests
import graphene
from django.conf import settings
from .arango_client import get_db, ensure_graph
logger = logging.getLogger(__name__)
class EdgeType(graphene.ObjectType):
"""Edge between two nodes (route)."""
to_uuid = graphene.String()
to_name = graphene.String()
to_latitude = graphene.Float()
to_longitude = graphene.Float()
distance_km = graphene.Float()
travel_time_seconds = graphene.Int()
transport_type = graphene.String()
class NodeType(graphene.ObjectType):
"""Logistics node with edges to neighbors."""
uuid = graphene.String()
name = graphene.String()
latitude = graphene.Float()
longitude = graphene.Float()
country = graphene.String()
country_code = graphene.String()
synced_at = graphene.String()
transport_types = graphene.List(graphene.String)
edges = graphene.List(EdgeType)
class NodeConnectionsType(graphene.ObjectType):
"""Auto + rail edges for a node, rail uses nearest rail node."""
hub = graphene.Field(NodeType)
rail_node = graphene.Field(NodeType)
auto_edges = graphene.List(EdgeType)
rail_edges = graphene.List(EdgeType)
class RouteType(graphene.ObjectType):
"""Route between two points with geometry."""
distance_km = graphene.Float()
geometry = graphene.JSONString(description="GeoJSON LineString coordinates")
class RouteStageType(graphene.ObjectType):
"""Single stage in a multi-hop route."""
from_uuid = graphene.String()
from_name = graphene.String()
from_lat = graphene.Float()
from_lon = graphene.Float()
to_uuid = graphene.String()
to_name = graphene.String()
to_lat = graphene.Float()
to_lon = graphene.Float()
distance_km = graphene.Float()
travel_time_seconds = graphene.Int()
transport_type = graphene.String()
class RoutePathType(graphene.ObjectType):
"""Complete route through graph with multiple stages."""
total_distance_km = graphene.Float()
total_time_seconds = graphene.Int()
stages = graphene.List(RouteStageType)
class ProductRouteOptionType(graphene.ObjectType):
"""Route options for a product source to the destination."""
source_uuid = graphene.String()
source_name = graphene.String()
source_lat = graphene.Float()
source_lon = graphene.Float()
distance_km = graphene.Float()
routes = graphene.List(RoutePathType)
class Query(graphene.ObjectType):
"""Root query."""
MAX_EXPANSIONS = 20000
node = graphene.Field(
NodeType,
uuid=graphene.String(required=True),
description="Get node by UUID with all edges to neighbors",
)
nodes = graphene.List(
NodeType,
description="Get all nodes (without edges for performance)",
limit=graphene.Int(),
offset=graphene.Int(),
transport_type=graphene.String(),
country=graphene.String(description="Filter by country name"),
search=graphene.String(description="Search by node name (case-insensitive)"),
)
nodes_count = graphene.Int(
transport_type=graphene.String(),
country=graphene.String(description="Filter by country name"),
description="Get total count of nodes (with optional transport/country filter)",
)
hub_countries = graphene.List(
graphene.String,
description="List of countries that have logistics hubs",
)
nearest_nodes = graphene.List(
NodeType,
lat=graphene.Float(required=True, description="Latitude"),
lon=graphene.Float(required=True, description="Longitude"),
limit=graphene.Int(default_value=5, description="Max results"),
description="Find nearest logistics nodes to given coordinates",
)
node_connections = graphene.Field(
NodeConnectionsType,
uuid=graphene.String(required=True),
limit_auto=graphene.Int(default_value=12),
limit_rail=graphene.Int(default_value=12),
description="Get auto + rail edges for a node (rail uses nearest rail node)",
)
auto_route = graphene.Field(
RouteType,
from_lat=graphene.Float(required=True),
from_lon=graphene.Float(required=True),
to_lat=graphene.Float(required=True),
to_lon=graphene.Float(required=True),
description="Get auto route between two points via GraphHopper",
)
rail_route = graphene.Field(
RouteType,
from_lat=graphene.Float(required=True),
from_lon=graphene.Float(required=True),
to_lat=graphene.Float(required=True),
to_lon=graphene.Float(required=True),
description="Get rail route between two points via OpenRailRouting",
)
find_routes = graphene.List(
RoutePathType,
from_uuid=graphene.String(required=True),
to_uuid=graphene.String(required=True),
limit=graphene.Int(default_value=3),
description="Find K shortest routes through graph between two nodes",
)
find_product_routes = graphene.List(
ProductRouteOptionType,
product_uuid=graphene.String(required=True),
to_uuid=graphene.String(required=True),
limit_sources=graphene.Int(default_value=3),
limit_routes=graphene.Int(default_value=3),
description="Find routes from product offer nodes to destination",
)
@staticmethod
def _build_routes(db, from_uuid, to_uuid, limit):
"""Shared helper to compute K shortest routes between two nodes."""
aql = """
FOR path IN ANY K_SHORTEST_PATHS
@from_vertex TO @to_vertex
GRAPH 'optovia_graph'
OPTIONS { weightAttribute: 'distance_km' }
LIMIT @limit
RETURN {
vertices: path.vertices,
edges: path.edges,
weight: path.weight
}
"""
try:
cursor = db.aql.execute(
aql,
bind_vars={
'from_vertex': f'nodes/{from_uuid}',
'to_vertex': f'nodes/{to_uuid}',
'limit': limit,
},
)
paths = list(cursor)
except Exception as e:
logger.error("K_SHORTEST_PATHS query failed: %s", e)
return []
if not paths:
logger.info("No paths found from %s to %s", from_uuid, to_uuid)
return []
routes = []
for path in paths:
vertices = path.get('vertices', [])
edges = path.get('edges', [])
# Build vertex lookup by _id
vertex_by_id = {v['_id']: v for v in vertices}
stages = []
for edge in edges:
from_node = vertex_by_id.get(edge['_from'], {})
to_node = vertex_by_id.get(edge['_to'], {})
stages.append(RouteStageType(
from_uuid=from_node.get('_key'),
from_name=from_node.get('name'),
from_lat=from_node.get('latitude'),
from_lon=from_node.get('longitude'),
to_uuid=to_node.get('_key'),
to_name=to_node.get('name'),
to_lat=to_node.get('latitude'),
to_lon=to_node.get('longitude'),
distance_km=edge.get('distance_km'),
travel_time_seconds=edge.get('travel_time_seconds'),
transport_type=edge.get('transport_type'),
))
total_time = sum(s.travel_time_seconds or 0 for s in stages)
routes.append(RoutePathType(
total_distance_km=path.get('weight'),
total_time_seconds=total_time,
stages=stages,
))
return routes
def resolve_node(self, info, uuid):
"""
Get a single node with all its outgoing edges.
Returns node info + list of edges to neighbors with distances.
"""
db = get_db()
# Get node
nodes_col = db.collection('nodes')
node = nodes_col.get(uuid)
if not node:
return None
# Get all outgoing edges from this node
edges_col = db.collection('edges')
aql = """
FOR edge IN edges
FILTER edge._from == @from_id
LET to_node = DOCUMENT(edge._to)
RETURN {
to_uuid: to_node._key,
to_name: to_node.name,
to_latitude: to_node.latitude,
to_longitude: to_node.longitude,
distance_km: edge.distance_km,
travel_time_seconds: edge.travel_time_seconds,
transport_type: edge.transport_type
}
"""
cursor = db.aql.execute(aql, bind_vars={'from_id': f"nodes/{uuid}"})
edges = list(cursor)
logger.info("Node %s has %d edges", uuid, len(edges))
return NodeType(
uuid=node['_key'],
name=node.get('name'),
latitude=node.get('latitude'),
longitude=node.get('longitude'),
country=node.get('country'),
country_code=node.get('country_code'),
synced_at=node.get('synced_at'),
transport_types=node.get('transport_types') or [],
edges=[EdgeType(**e) for e in edges],
)
def resolve_nodes(self, info, limit=None, offset=None, transport_type=None, country=None, search=None):
"""Get all logistics nodes (without edges for list view)."""
db = get_db()
# Only return logistics nodes (not buyer/seller addresses)
aql = """
FOR node IN nodes
FILTER node.node_type == 'logistics' OR node.node_type == null
LET types = node.transport_types != null ? node.transport_types : []
FILTER @transport_type == null OR @transport_type IN types
FILTER @country == null OR node.country == @country
FILTER @search == null OR CONTAINS(LOWER(node.name), LOWER(@search)) OR CONTAINS(LOWER(node.country), LOWER(@search))
SORT node.name ASC
LIMIT @offset, @limit
RETURN node
"""
cursor = db.aql.execute(
aql,
bind_vars={
'transport_type': transport_type,
'country': country,
'search': search,
'offset': 0 if offset is None else offset,
'limit': 1000000 if limit is None else limit,
},
)
nodes = []
for node in cursor:
nodes.append(NodeType(
uuid=node['_key'],
name=node.get('name'),
latitude=node.get('latitude'),
longitude=node.get('longitude'),
country=node.get('country'),
country_code=node.get('country_code'),
synced_at=node.get('synced_at'),
transport_types=node.get('transport_types') or [],
edges=[], # Don't load edges for list
))
logger.info("Returning %d nodes", len(nodes))
return nodes
def resolve_nodes_count(self, info, transport_type=None, country=None):
db = get_db()
aql = """
FOR node IN nodes
FILTER node.node_type == 'logistics' OR node.node_type == null
LET types = node.transport_types != null ? node.transport_types : []
FILTER @transport_type == null OR @transport_type IN types
FILTER @country == null OR node.country == @country
COLLECT WITH COUNT INTO length
RETURN length
"""
cursor = db.aql.execute(aql, bind_vars={'transport_type': transport_type, 'country': country})
return next(cursor, 0)
def resolve_hub_countries(self, info):
"""Get unique country names from logistics hubs."""
db = get_db()
aql = """
FOR node IN nodes
FILTER node.node_type == 'logistics' OR node.node_type == null
FILTER node.country != null
COLLECT country = node.country
SORT country ASC
RETURN country
"""
cursor = db.aql.execute(aql)
return list(cursor)
def resolve_nearest_nodes(self, info, lat, lon, limit=5):
"""Find nearest logistics nodes to given coordinates."""
db = get_db()
# Get all logistics nodes and calculate distance
aql = """
FOR node IN nodes
FILTER node.node_type == 'logistics' OR node.node_type == null
FILTER node.latitude != null AND node.longitude != null
LET dist = DISTANCE(node.latitude, node.longitude, @lat, @lon) / 1000
SORT dist ASC
LIMIT @limit
RETURN MERGE(node, {distance_km: dist})
"""
cursor = db.aql.execute(
aql,
bind_vars={'lat': lat, 'lon': lon, 'limit': limit},
)
nodes = []
for node in cursor:
nodes.append(NodeType(
uuid=node['_key'],
name=node.get('name'),
latitude=node.get('latitude'),
longitude=node.get('longitude'),
country=node.get('country'),
country_code=node.get('country_code'),
synced_at=node.get('synced_at'),
transport_types=node.get('transport_types') or [],
edges=[],
))
return nodes
def resolve_node_connections(self, info, uuid, limit_auto=12, limit_rail=12):
"""Get auto edges from hub and rail edges from nearest rail node."""
db = get_db()
nodes_col = db.collection('nodes')
hub = nodes_col.get(uuid)
if not hub:
return None
aql = """
LET auto_edges = (
FOR edge IN edges
FILTER edge._from == @from_id AND edge.transport_type == "auto"
LET to_node = DOCUMENT(edge._to)
FILTER to_node != null
SORT edge.distance_km ASC
LIMIT @limit_auto
RETURN {
to_uuid: to_node._key,
to_name: to_node.name,
to_latitude: to_node.latitude,
to_longitude: to_node.longitude,
distance_km: edge.distance_km,
travel_time_seconds: edge.travel_time_seconds,
transport_type: edge.transport_type
}
)
LET hub_has_rail = @hub_has_rail
LET rail_node = hub_has_rail ? DOCUMENT(@from_id) : FIRST(
FOR node IN nodes
FILTER node.latitude != null AND node.longitude != null
FILTER 'rail' IN node.transport_types
SORT DISTANCE(@hub_lat, @hub_lon, node.latitude, node.longitude)
LIMIT 1
RETURN node
)
LET rail_edges = rail_node == null ? [] : (
FOR edge IN edges
FILTER edge._from == CONCAT("nodes/", rail_node._key) AND edge.transport_type == "rail"
LET to_node = DOCUMENT(edge._to)
FILTER to_node != null
SORT edge.distance_km ASC
LIMIT @limit_rail
RETURN {
to_uuid: to_node._key,
to_name: to_node.name,
to_latitude: to_node.latitude,
to_longitude: to_node.longitude,
distance_km: edge.distance_km,
travel_time_seconds: edge.travel_time_seconds,
transport_type: edge.transport_type
}
)
RETURN {
hub: DOCUMENT(@from_id),
rail_node: rail_node,
auto_edges: auto_edges,
rail_edges: rail_edges
}
"""
cursor = db.aql.execute(
aql,
bind_vars={
'from_id': f"nodes/{uuid}",
'hub_lat': hub.get('latitude'),
'hub_lon': hub.get('longitude'),
'hub_has_rail': 'rail' in (hub.get('transport_types') or []),
'limit_auto': limit_auto,
'limit_rail': limit_rail,
},
)
result = next(cursor, None)
if not result:
return None
def build_node(doc):
if not doc:
return None
return NodeType(
uuid=doc['_key'],
name=doc.get('name'),
latitude=doc.get('latitude'),
longitude=doc.get('longitude'),
country=doc.get('country'),
country_code=doc.get('country_code'),
synced_at=doc.get('synced_at'),
transport_types=doc.get('transport_types') or [],
edges=[],
)
return NodeConnectionsType(
hub=build_node(result.get('hub')),
rail_node=build_node(result.get('rail_node')),
auto_edges=[EdgeType(**e) for e in result.get('auto_edges') or []],
rail_edges=[EdgeType(**e) for e in result.get('rail_edges') or []],
)
def resolve_auto_route(self, info, from_lat, from_lon, to_lat, to_lon):
"""Get auto route via GraphHopper."""
url = f"{settings.GRAPHHOPPER_EXTERNAL_URL}/route"
params = {
'point': [f"{from_lat},{from_lon}", f"{to_lat},{to_lon}"],
'profile': 'car',
'instructions': 'false',
'calc_points': 'true',
'points_encoded': 'false',
}
try:
response = requests.get(url, params=params, timeout=30)
response.raise_for_status()
data = response.json()
if 'paths' in data and len(data['paths']) > 0:
path = data['paths'][0]
distance_km = round(path.get('distance', 0) / 1000, 2)
points = path.get('points', {})
coordinates = points.get('coordinates', [])
return RouteType(
distance_km=distance_km,
geometry=coordinates,
)
except requests.RequestException as e:
logger.error("GraphHopper request failed: %s", e)
return None
def resolve_rail_route(self, info, from_lat, from_lon, to_lat, to_lon):
"""Get rail route via OpenRailRouting."""
url = f"{settings.OPENRAILROUTING_EXTERNAL_URL}/route"
params = {
'point': [f"{from_lat},{from_lon}", f"{to_lat},{to_lon}"],
'profile': 'all_tracks',
'calc_points': 'true',
'points_encoded': 'false',
}
try:
response = requests.get(url, params=params, timeout=60)
response.raise_for_status()
data = response.json()
if 'paths' in data and len(data['paths']) > 0:
path = data['paths'][0]
distance_km = round(path.get('distance', 0) / 1000, 2)
points = path.get('points', {})
coordinates = points.get('coordinates', [])
return RouteType(
distance_km=distance_km,
geometry=coordinates,
)
except requests.RequestException as e:
logger.error("OpenRailRouting request failed: %s", e)
return None
def resolve_find_routes(self, info, from_uuid, to_uuid, limit=3):
"""Find K shortest routes through graph using ArangoDB K_SHORTEST_PATHS."""
db = get_db()
ensure_graph()
return Query._build_routes(db, from_uuid, to_uuid, limit)
def resolve_find_product_routes(self, info, product_uuid, to_uuid, limit_sources=3, limit_routes=3):
"""
Найти до N ближайших офферов и вернуть по одному маршруту:
авто -> (rail сколько угодно) -> авто. Поиск идёт от точки назначения наружу.
"""
db = get_db()
ensure_graph()  # the graph exists, but we traverse it manually
# Load destination node for distance sorting
nodes_col = db.collection('nodes')
dest = nodes_col.get(to_uuid)
if not dest:
logger.info("Destination node %s not found", to_uuid)
return []
dest_lat = dest.get('latitude')
dest_lon = dest.get('longitude')
if dest_lat is None or dest_lon is None:
logger.info("Destination node %s missing coordinates", to_uuid)
return []
max_sources = limit_sources or 5
max_routes = 1  # always one route per offer
# Helpers
def allowed_next_phase(current_phase, transport_type):
"""
Phases — расширение радиуса поиска, ЖД не обязателен:
- end_auto: можно 1 авто, rail, или сразу offer
- end_auto_done: авто использовано — rail или offer
- rail: любое кол-во rail, потом 1 авто или offer
- start_auto_done: авто использовано — только offer
Offer можно найти на любом этапе!
"""
if current_phase == 'end_auto':
if transport_type == 'offer':
return 'offer'  # found right next to the destination
if transport_type == 'auto':
return 'end_auto_done'
if transport_type == 'rail':
return 'rail'
return None
if current_phase == 'end_auto_done':
if transport_type == 'offer':
return 'offer'  # found after a single auto hop
if transport_type == 'rail':
return 'rail'
return None
if current_phase == 'rail':
if transport_type == 'offer':
return 'offer'  # found at a rail station
if transport_type == 'rail':
return 'rail'
if transport_type == 'auto':
return 'start_auto_done'
return None
if current_phase == 'start_auto_done':
if transport_type == 'offer':
return 'offer'
return None
return None
def fetch_neighbors(node_key, phase):
"""Получить соседей с учётом допустимых типов транспорта."""
# offer доступен на всех фазах — ищем ближайший
if phase == 'end_auto':
types = ['auto', 'rail', 'offer']
elif phase == 'end_auto_done':
types = ['rail', 'offer']
elif phase == 'rail':
types = ['rail', 'auto', 'offer']
elif phase == 'start_auto_done':
types = ['offer']
else:
types = ['offer']
aql = """
FOR edge IN edges
FILTER edge.transport_type IN @types
FILTER edge._from == @node_id OR edge._to == @node_id
LET neighbor_id = edge._from == @node_id ? edge._to : edge._from
LET neighbor = DOCUMENT(neighbor_id)
FILTER neighbor != null
RETURN {
neighbor_key: neighbor._key,
neighbor_doc: neighbor,
from_id: edge._from,
to_id: edge._to,
transport_type: edge.transport_type,
distance_km: edge.distance_km,
travel_time_seconds: edge.travel_time_seconds
}
"""
cursor = db.aql.execute(
aql,
bind_vars={
'node_id': f"nodes/{node_key}",
'types': types,
},
)
return list(cursor)
# Priority queue: (cost, seq, node_key, phase)
queue = []
counter = 0
heapq.heappush(queue, (0, counter, to_uuid, 'end_auto'))
visited = {} # (node, phase) -> best_cost
predecessors = {} # (node, phase) -> (prev_node, prev_phase, edge_info)
node_docs = {to_uuid: dest}
found_routes = []
expansions = 0
while queue and len(found_routes) < max_sources and expansions < Query.MAX_EXPANSIONS:
cost, _, node_key, phase = heapq.heappop(queue)
if (node_key, phase) in visited and cost > visited[(node_key, phase)]:
continue
# If we found an offer for the requested product in an allowed phase, record the route
node_doc = node_docs.get(node_key)
if node_doc and node_doc.get('product_uuid') == product_uuid:
path_edges = []
state = (node_key, phase)
current_key = node_key
while state in predecessors:
prev_state, edge_info = predecessors[state]
prev_key = prev_state[0]
path_edges.append((current_key, prev_key, edge_info)) # from source toward dest
state = prev_state
current_key = prev_key
route = _build_route_from_edges(path_edges, node_docs)
distance_km = None
src_lat = node_doc.get('latitude')
src_lon = node_doc.get('longitude')
if src_lat is not None and src_lon is not None:
distance_km = _distance_km(src_lat, src_lon, dest_lat, dest_lon)
found_routes.append(ProductRouteOptionType(
source_uuid=node_key,
source_name=node_doc.get('name'),
source_lat=node_doc.get('latitude'),
source_lon=node_doc.get('longitude'),
distance_km=distance_km,
routes=[route] if route else [],
))
# keep searching for the remaining sources
continue
neighbors = fetch_neighbors(node_key, phase)
expansions += 1
for neighbor in neighbors:
transport_type = neighbor.get('transport_type')
next_phase = allowed_next_phase(phase, transport_type)
if next_phase is None:
continue
travel_time = neighbor.get('travel_time_seconds')
distance_km = neighbor.get('distance_km')
neighbor_key = neighbor.get('neighbor_key')
node_docs[neighbor_key] = neighbor.get('neighbor_doc')
step_cost = travel_time if travel_time is not None else (distance_km or 0)
new_cost = cost + step_cost
state_key = (neighbor_key, next_phase)
if state_key in visited and new_cost >= visited[state_key]:
continue
visited[state_key] = new_cost
counter += 1
heapq.heappush(queue, (new_cost, counter, neighbor_key, next_phase))
predecessors[state_key] = ((node_key, phase), neighbor)
if not found_routes:
logger.info("No product routes found for %s -> %s", product_uuid, to_uuid)
return []
return found_routes
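The phase transitions used by this resolver can be exercised in isolation. Below is a minimal standalone sketch of the same state machine (a hypothetical table-driven rewrite mirroring `allowed_next_phase`, not the resolver code itself):

```python
# Standalone copy of the phase machine from resolve_find_product_routes.
# Phases widen the search from the destination outward; rail is optional.
TRANSITIONS = {
    'end_auto':        {'offer': 'offer', 'auto': 'end_auto_done', 'rail': 'rail'},
    'end_auto_done':   {'offer': 'offer', 'rail': 'rail'},
    'rail':            {'offer': 'offer', 'rail': 'rail', 'auto': 'start_auto_done'},
    'start_auto_done': {'offer': 'offer'},
}

def allowed_next_phase(phase, transport_type):
    """Return the next phase, or None if the hop is not allowed."""
    return TRANSITIONS.get(phase, {}).get(transport_type)

# A legal path: one auto hop from the destination, rail hops, one auto, offer.
phase = 'end_auto'
for hop in ['auto', 'rail', 'rail', 'auto', 'offer']:
    phase = allowed_next_phase(phase, hop)
print(phase)  # 'offer'

# A second auto hop before the offer is rejected:
print(allowed_next_phase('start_auto_done', 'auto'))  # None
```

The table form makes the constraint visible at a glance: at most one auto hop on each side of an optional rail segment, and an offer terminates the walk from any phase.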
schema = graphene.Schema(query=Query)
# Helper methods attached to Query for route assembly
def _build_stage(from_doc, to_doc, transport_type, edges):
distance_km = sum(e.get('distance_km') or 0 for e in edges)
travel_time = sum(e.get('travel_time_seconds') or 0 for e in edges)
return RouteStageType(
from_uuid=from_doc.get('_key') if from_doc else None,
from_name=from_doc.get('name') if from_doc else None,
from_lat=from_doc.get('latitude') if from_doc else None,
from_lon=from_doc.get('longitude') if from_doc else None,
to_uuid=to_doc.get('_key') if to_doc else None,
to_name=to_doc.get('name') if to_doc else None,
to_lat=to_doc.get('latitude') if to_doc else None,
to_lon=to_doc.get('longitude') if to_doc else None,
distance_km=distance_km,
travel_time_seconds=travel_time,
transport_type=transport_type,
)
def _build_route_from_edges(path_edges, node_docs):
"""Собрать RoutePathType из списка ребёр (source->dest), схлопывая типы."""
if not path_edges:
return None
stages = []
current_edges = []
current_type = None
segment_start = None
for from_key, to_key, edge in path_edges:
edge_type = edge.get('transport_type')
if current_type is None:
current_type = edge_type
current_edges = [edge]
segment_start = from_key
elif edge_type == current_type:
current_edges.append(edge)
else:
# close the previous segment
stages.append(_build_stage(
node_docs.get(segment_start),
node_docs.get(from_key),
current_type,
current_edges,
))
current_type = edge_type
current_edges = [edge]
segment_start = from_key
# final segment
last_to = path_edges[-1][1]
stages.append(_build_stage(
node_docs.get(segment_start),
node_docs.get(last_to),
current_type,
current_edges,
))
total_distance = sum(s.distance_km or 0 for s in stages)
total_time = sum(s.travel_time_seconds or 0 for s in stages)
return RoutePathType(
total_distance_km=total_distance,
total_time_seconds=total_time,
stages=stages,
)
# Bind helpers to class for access in resolver
Query._build_route_from_edges = _build_route_from_edges
def _distance_km(lat1, lon1, lat2, lon2):
"""Haversine distance in km."""
r = 6371
d_lat = math.radians(lat2 - lat1)
d_lon = math.radians(lon2 - lon1)
a = (
math.sin(d_lat / 2) ** 2
+ math.cos(math.radians(lat1))
* math.cos(math.radians(lat2))
* math.sin(d_lon / 2) ** 2
)
c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
return r * c
Query._distance_km = _distance_km


@@ -1,17 +0,0 @@
#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys
if __name__ == '__main__':
"""Run administrative tasks."""
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'geo.settings')
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
) from exc
execute_from_command_line(sys.argv)


@@ -1,18 +0,0 @@
providers = ["python"]
[build]
[phases.install]
cmds = [
"python -m venv --copies /opt/venv",
". /opt/venv/bin/activate",
"pip install poetry==$NIXPACKS_POETRY_VERSION",
"poetry install --no-interaction --no-ansi"
]
[start]
cmd = "poetry run python manage.py collectstatic --noinput && poetry run python -m gunicorn geo.wsgi:application --bind 0.0.0.0:${PORT:-8000}"
[variables]
# Set Poetry version to match local environment
NIXPACKS_POETRY_VERSION = "2.2.1"

package-lock.json generated Normal file (+3732 lines)

File diff suppressed because it is too large

package.json Normal file (+25 lines)

@@ -0,0 +1,25 @@
{
"name": "geo",
"version": "1.0.0",
"type": "module",
"scripts": {
"build": "tsc",
"start": "node dist/index.js",
"dev": "tsx --watch src/index.ts"
},
"dependencies": {
"@apollo/server": "^4.11.3",
"@sentry/node": "^9.5.0",
"arangojs": "^9.2.0",
"cors": "^2.8.5",
"express": "^5.0.1",
"h3-js": "^4.2.1"
},
"devDependencies": {
"@types/cors": "^2.8.17",
"@types/express": "^5.0.0",
"@types/node": "^22.13.0",
"tsx": "^4.19.3",
"typescript": "^5.7.3"
}
}


@@ -1,26 +0,0 @@
[project]
name = "geo"
version = "0.1.0"
description = "Geo service - logistics graph and routing"
authors = [
{name = "Ruslan Bakiev",email = "572431+veikab@users.noreply.github.com"}
]
requires-python = "^3.11"
dependencies = [
"django (>=5.2.8,<6.0)",
"graphene-django (>=3.2.3,<4.0.0)",
"django-cors-headers (>=4.9.0,<5.0.0)",
"python-arango (>=8.0.0,<9.0.0)",
"python-dotenv (>=1.2.1,<2.0.0)",
"infisicalsdk (>=1.0.12,<2.0.0)",
"gunicorn (>=23.0.0,<24.0.0)",
"whitenoise (>=6.7.0,<7.0.0)",
"sentry-sdk (>=2.47.0,<3.0.0)"
]
[tool.poetry]
package-mode = false
[build-system]
requires = ["poetry-core>=2.0.0,<3.0.0"]
build-backend = "poetry.core.masonry.api"

scripts/load-secrets.mjs Normal file (+44 lines)

@@ -0,0 +1,44 @@
import { InfisicalSDK } from "@infisical/sdk";
import { writeFileSync } from "fs";
const INFISICAL_API_URL = process.env.INFISICAL_API_URL;
const INFISICAL_CLIENT_ID = process.env.INFISICAL_CLIENT_ID;
const INFISICAL_CLIENT_SECRET = process.env.INFISICAL_CLIENT_SECRET;
const INFISICAL_PROJECT_ID = process.env.INFISICAL_PROJECT_ID;
const INFISICAL_ENV = process.env.INFISICAL_ENV || "prod";
const SECRET_PATHS = (process.env.INFISICAL_SECRET_PATHS || "/shared").split(",");
if (!INFISICAL_API_URL || !INFISICAL_CLIENT_ID || !INFISICAL_CLIENT_SECRET || !INFISICAL_PROJECT_ID) {
process.stderr.write("Missing required Infisical environment variables\n");
process.exit(1);
}
const client = new InfisicalSDK({ siteUrl: INFISICAL_API_URL });
await client.auth().universalAuth.login({
clientId: INFISICAL_CLIENT_ID,
clientSecret: INFISICAL_CLIENT_SECRET,
});
process.stderr.write(`Loading secrets from Infisical (env: ${INFISICAL_ENV})...\n`);
const envLines = [];
for (const secretPath of SECRET_PATHS) {
const response = await client.secrets().listSecrets({
projectId: INFISICAL_PROJECT_ID,
environment: INFISICAL_ENV,
secretPath: secretPath.trim(),
expandSecretReferences: true,
});
for (const secret of response.secrets) {
const escapedValue = secret.secretValue.replace(/'/g, "'\\''");
envLines.push(`export ${secret.secretKey}='${escapedValue}'`);
}
process.stderr.write(` ${secretPath.trim()}: ${response.secrets.length} secrets loaded\n`);
}
writeFileSync(".env.infisical", envLines.join("\n"));
process.stderr.write("Secrets written to .env.infisical\n");

scripts/load-vault-env.sh Executable file (+60 lines)

@@ -0,0 +1,60 @@
#!/bin/sh
set -eu
log() {
printf '%s\n' "$*" >&2
}
VAULT_ENABLED="${VAULT_ENABLED:-auto}"
if [ "$VAULT_ENABLED" = "false" ] || [ "$VAULT_ENABLED" = "0" ]; then
exit 0
fi
if [ -z "${VAULT_ADDR:-}" ] || [ -z "${VAULT_TOKEN:-}" ]; then
if [ "$VAULT_ENABLED" = "true" ] || [ "$VAULT_ENABLED" = "1" ]; then
log "Vault bootstrap is required but VAULT_ADDR or VAULT_TOKEN is missing."
exit 1
fi
exit 0
fi
if ! command -v curl >/dev/null 2>&1 || ! command -v jq >/dev/null 2>&1; then
log "Vault bootstrap requires curl and jq."
exit 1
fi
VAULT_KV_MOUNT="${VAULT_KV_MOUNT:-secret}"
load_secret_path() {
path="$1"
source_name="$2"
if [ -z "$path" ]; then
return 0
fi
url="${VAULT_ADDR%/}/v1/${VAULT_KV_MOUNT}/data/${path}"
response="$(curl -fsS -H "X-Vault-Token: $VAULT_TOKEN" "$url")" || {
log "Failed to load Vault path ${VAULT_KV_MOUNT}/${path}."
return 1
}
encoded_items="$(printf '%s' "$response" | jq -r '.data.data // {} | to_entries[]? | @base64')"
if [ -z "$encoded_items" ]; then
return 0
fi
old_ifs="${IFS}"
IFS='
'
for encoded_item in $encoded_items; do
key="$(printf '%s' "$encoded_item" | base64 -d | jq -r '.key')"
value="$(printf '%s' "$encoded_item" | base64 -d | jq -r '.value | tostring')"
export "$key=$value"
done
IFS="${old_ifs}"
log "Loaded Vault ${source_name} secrets from ${VAULT_KV_MOUNT}/${path}."
}
load_secret_path "${VAULT_SHARED_PATH:-}" "shared"
load_secret_path "${VAULT_PROJECT_PATH:-}" "project"

src/cluster.ts Normal file (+192 lines)

@@ -0,0 +1,192 @@
import { latLngToCell, cellToLatLng } from 'h3-js'
import { getDb } from './db.js'
const ZOOM_TO_RES: Record<number, number> = {
0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2,
6: 3, 7: 3, 8: 4, 9: 4, 10: 5, 11: 5,
12: 6, 13: 7, 14: 8, 15: 9, 16: 10,
}
interface CachedNode {
_key: string
name?: string
latitude?: number
longitude?: number
country?: string
country_code?: string
node_type?: string
transport_types?: string[]
}
const nodesCache = new Map<string, CachedNode[]>()
function fetchNodes(transportType?: string | null, nodeType?: string | null): CachedNode[] {
const cacheKey = `nodes:${transportType || 'all'}:${nodeType || 'logistics'}`
if (nodesCache.has(cacheKey)) return nodesCache.get(cacheKey)!
// arangojs queries are async, so a cache miss cannot be filled synchronously here.
// Kept only as a guard against accidental sync calls; use fetchNodesAsync instead.
throw new Error('Use fetchNodesAsync instead')
}
export async function fetchNodesAsync(transportType?: string | null, nodeType?: string | null): Promise<CachedNode[]> {
const cacheKey = `nodes:${transportType || 'all'}:${nodeType || 'logistics'}`
if (nodesCache.has(cacheKey)) return nodesCache.get(cacheKey)!
const db = getDb()
let aql: string
if (nodeType === 'offer') {
aql = `
FOR node IN nodes
FILTER node.node_type == 'offer'
FILTER node.latitude != null AND node.longitude != null
RETURN node
`
} else if (nodeType === 'supplier') {
aql = `
FOR offer IN nodes
FILTER offer.node_type == 'offer'
FILTER offer.supplier_uuid != null
LET supplier = DOCUMENT(CONCAT('nodes/', offer.supplier_uuid))
FILTER supplier != null
FILTER supplier.latitude != null AND supplier.longitude != null
COLLECT sup_uuid = offer.supplier_uuid INTO offers
LET sup = DOCUMENT(CONCAT('nodes/', sup_uuid))
RETURN {
_key: sup_uuid,
name: sup.name,
latitude: sup.latitude,
longitude: sup.longitude,
country: sup.country,
country_code: sup.country_code,
node_type: 'supplier',
offers_count: LENGTH(offers)
}
`
} else {
aql = `
FOR node IN nodes
FILTER node.node_type == 'logistics' OR node.node_type == null
FILTER node.latitude != null AND node.longitude != null
RETURN node
`
}
const cursor = await db.query(aql)
let allNodes: CachedNode[] = await cursor.all()
if (transportType && (!nodeType || nodeType === 'logistics')) {
allNodes = allNodes.filter(n => (n.transport_types || []).includes(transportType))
}
nodesCache.set(cacheKey, allNodes)
console.log(`Cached ${allNodes.length} nodes for ${cacheKey}`)
return allNodes
}
export interface ClusterPoint {
id: string
latitude: number
longitude: number
count: number
expansion_zoom: number | null
name: string | null
}
export async function getClusteredNodes(
west: number, south: number, east: number, north: number,
zoom: number, transportType?: string | null, nodeType?: string | null,
): Promise<ClusterPoint[]> {
const resolution = ZOOM_TO_RES[Math.floor(zoom)] ?? 5
const nodes = await fetchNodesAsync(transportType, nodeType)
if (!nodes.length) return []
const cells = new Map<string, CachedNode[]>()
for (const node of nodes) {
const lat = node.latitude
const lng = node.longitude
if (lat == null || lng == null) continue
if (lat < south || lat > north || lng < west || lng > east) continue
const cell = latLngToCell(lat, lng, resolution)
if (!cells.has(cell)) cells.set(cell, [])
cells.get(cell)!.push(node)
}
const results: ClusterPoint[] = []
for (const [cell, nodesInCell] of cells) {
if (nodesInCell.length === 1) {
const node = nodesInCell[0]
results.push({
id: node._key,
latitude: node.latitude!,
longitude: node.longitude!,
count: 1,
expansion_zoom: null,
name: node.name || null,
})
} else {
const [lat, lng] = cellToLatLng(cell)
results.push({
id: `cluster-${cell}`,
latitude: lat,
longitude: lng,
count: nodesInCell.length,
expansion_zoom: Math.min(zoom + 2, 16),
name: null,
})
}
}
return results
}
export function invalidateCache(): void {
nodesCache.clear()
console.log('Cluster cache invalidated')
}
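`getClusteredNodes` maps the map zoom level to an H3 resolution through the `ZOOM_TO_RES` table, flooring fractional zooms and falling back to resolution 5 outside the table. A quick Python check of that lookup (same table as above; the h3 cell call itself is not reproduced):

```python
# Mirror of ZOOM_TO_RES in src/cluster.ts.
ZOOM_TO_RES = {
    0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2,
    6: 3, 7: 3, 8: 4, 9: 4, 10: 5, 11: 5,
    12: 6, 13: 7, 14: 8, 15: 9, 16: 10,
}

def resolution_for_zoom(zoom: float) -> int:
    # Matches `ZOOM_TO_RES[Math.floor(zoom)] ?? 5` for the non-negative zooms maps use.
    return ZOOM_TO_RES.get(int(zoom), 5)

print(resolution_for_zoom(7.8))  # 3  (floored to zoom 7)
print(resolution_for_zoom(18))   # 5  (outside the table, fallback)
```

The coarse-grained steps (two zoom levels per resolution up to zoom 11) keep cluster counts stable while panning, then finer steps kick in at street-level zooms.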

src/db.ts Normal file (+27 lines)

@@ -0,0 +1,27 @@
import { Database } from 'arangojs'
const ARANGODB_URL = process.env.ARANGODB_URL || process.env.ARANGODB_INTERNAL_URL || 'http://localhost:8529'
const ARANGODB_DATABASE = process.env.ARANGODB_DATABASE || 'optovia_maps'
const ARANGODB_PASSWORD = process.env.ARANGODB_PASSWORD || ''
let _db: Database | null = null
export function getDb(): Database {
if (!_db) {
const url = ARANGODB_URL.startsWith('http') ? ARANGODB_URL : `http://${ARANGODB_URL}`
_db = new Database({ url, databaseName: ARANGODB_DATABASE, auth: { username: 'root', password: ARANGODB_PASSWORD } })
console.log(`Connected to ArangoDB: ${url}/${ARANGODB_DATABASE}`)
}
return _db
}
export async function ensureGraph(): Promise<void> {
const db = getDb()
const graphs = await db.listGraphs()
if (graphs.some(g => g.name === 'optovia_graph')) return
console.log('Creating graph: optovia_graph')
await db.createGraph('optovia_graph', [
{ collection: 'edges', from: ['nodes'], to: ['nodes'] },
])
}

src/helpers.ts Normal file (+90 lines)

@@ -0,0 +1,90 @@
/** Haversine distance in km. */
export function distanceKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
const R = 6371
const dLat = (lat2 - lat1) * Math.PI / 180
const dLon = (lon2 - lon1) * Math.PI / 180
const a =
Math.sin(dLat / 2) ** 2 +
Math.cos(lat1 * Math.PI / 180) * Math.cos(lat2 * Math.PI / 180) *
Math.sin(dLon / 2) ** 2
return R * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a))
}
// eslint-disable-next-line @typescript-eslint/no-explicit-any
export type ArangoDoc = Record<string, any>
export interface RouteStage {
from_uuid: string | null
from_name: string | null
from_lat: number | null
from_lon: number | null
to_uuid: string | null
to_name: string | null
to_lat: number | null
to_lon: number | null
distance_km: number
travel_time_seconds: number
transport_type: string | null
}
export interface RoutePath {
total_distance_km: number
total_time_seconds: number
stages: RouteStage[]
}
function buildStage(fromDoc: ArangoDoc | undefined, toDoc: ArangoDoc | undefined, transportType: string, edges: ArangoDoc[]): RouteStage {
const distance = edges.reduce((s, e) => s + (e.distance_km || 0), 0)
const time = edges.reduce((s, e) => s + (e.travel_time_seconds || 0), 0)
return {
from_uuid: fromDoc?._key ?? null,
from_name: fromDoc?.name ?? null,
from_lat: fromDoc?.latitude ?? null,
from_lon: fromDoc?.longitude ?? null,
to_uuid: toDoc?._key ?? null,
to_name: toDoc?.name ?? null,
to_lat: toDoc?.latitude ?? null,
to_lon: toDoc?.longitude ?? null,
distance_km: distance,
travel_time_seconds: time,
transport_type: transportType,
}
}
export function buildRouteFromEdges(pathEdges: [string, string, ArangoDoc][], nodeDocs: Map<string, ArangoDoc>): RoutePath | null {
if (!pathEdges.length) return null
// Filter offer edges — not transport stages
const filtered = pathEdges.filter(([, , e]) => e.transport_type !== 'offer')
if (!filtered.length) return null
const stages: RouteStage[] = []
let currentEdges: ArangoDoc[] = []
let currentType: string | null = null
let segmentStart: string | null = null
for (const [fromKey, , edge] of filtered) {
const edgeType = edge.transport_type as string
if (currentType === null) {
currentType = edgeType
currentEdges = [edge]
segmentStart = fromKey
} else if (edgeType === currentType) {
currentEdges.push(edge)
} else {
stages.push(buildStage(nodeDocs.get(segmentStart!), nodeDocs.get(fromKey), currentType, currentEdges))
currentType = edgeType
currentEdges = [edge]
segmentStart = fromKey
}
}
const lastTo = filtered[filtered.length - 1][1]
stages.push(buildStage(nodeDocs.get(segmentStart!), nodeDocs.get(lastTo), currentType!, currentEdges))
return {
total_distance_km: stages.reduce((s, st) => s + (st.distance_km || 0), 0),
total_time_seconds: stages.reduce((s, st) => s + (st.travel_time_seconds || 0), 0),
stages,
}
}
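`buildRouteFromEdges` collapses consecutive edges of the same transport type into one stage. The grouping step is equivalent to `itertools.groupby` keyed on transport type; a Python sketch of just that collapsing (illustrative data, not the exported helper):

```python
from itertools import groupby

# Edges as (from_key, to_key, transport_type, distance_km), in path order.
edges = [
    ('offer1', 'hub_a', 'auto', 12.0),
    ('hub_a', 'hub_b', 'rail', 300.0),
    ('hub_b', 'hub_c', 'rail', 450.0),
    ('hub_c', 'dest', 'auto', 8.0),
]

stages = []
for ttype, group in groupby(edges, key=lambda e: e[2]):
    group = list(group)
    stages.append({
        'transport_type': ttype,
        'from': group[0][0],   # start of the first edge in the run
        'to': group[-1][1],    # end of the last edge in the run
        'distance_km': sum(e[3] for e in group),
    })

print([(s['transport_type'], s['distance_km']) for s in stages])
# [('auto', 12.0), ('rail', 750.0), ('auto', 8.0)]
```

Four edges collapse into three stages because the two rail edges share a type; this matches the segment accumulation done with `currentType`/`currentEdges` in the TypeScript helper.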

src/index.ts Normal file (+33 lines)

@@ -0,0 +1,33 @@
import express from 'express'
import cors from 'cors'
import { ApolloServer } from '@apollo/server'
import { expressMiddleware } from '@apollo/server/express4'
import * as Sentry from '@sentry/node'
import { typeDefs, resolvers } from './schema.js'
const PORT = parseInt(process.env.PORT || '8000', 10)
const SENTRY_DSN = process.env.SENTRY_DSN || ''
if (SENTRY_DSN) {
Sentry.init({
dsn: SENTRY_DSN,
tracesSampleRate: 0.01,
release: process.env.RELEASE_VERSION || '1.0.0',
environment: process.env.ENVIRONMENT || 'production',
})
}
const app = express()
app.use(cors({ origin: ['https://optovia.ru'], credentials: true }))
const server = new ApolloServer({ typeDefs, resolvers, introspection: true })
await server.start()
app.use('/graphql/public', express.json(), expressMiddleware(server) as unknown as express.RequestHandler)
app.get('/health', (_, res) => { res.json({ status: 'ok' }) })
app.listen(PORT, '0.0.0.0', () => {
console.log(`Geo server ready on port ${PORT}`)
console.log(` /graphql/public - public (no auth)`)
})

src/schema.ts Normal file (+1027 lines)

File diff suppressed because it is too large

tsconfig.json Normal file (+19 lines)

@@ -0,0 +1,19 @@
{
"compilerOptions": {
"target": "ES2022",
"module": "NodeNext",
"moduleResolution": "NodeNext",
"outDir": "dist",
"rootDir": "src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true,
"resolveJsonModule": true,
"declaration": true,
"declarationMap": true,
"sourceMap": true
},
"include": ["src"],
"exclude": ["node_modules", "dist"]
}