Can Public Geometry Support Prospective Comparison?
R.I.S.K. tests whether road shape, physics-based reasoning, and controlled assumptions can identify relative differences before collision-history data is introduced.
Road Risk converts public road geometry into an inspectable physics-based model. Select a road, derive its geometry, apply scenario assumptions, and compare the resulting model output against wider road-network samples.
Model outputs are comparative estimates based on public road geometry, physics-informed calculations, and selected assumptions. They are not official crash predictions or road-safety ratings.
Generated in the live app after a road is selected and scenario assumptions are set.
Distance, duration, sampled risk, hotspots, route overlays, and exports live inside the app.
An illustrative preview, not a reviewed case study: a static overview of the live application's outputs, covering selected-road geometry, comparative model output, route context, and exportable evidence.
The live app uses public OSM-derived ways, nodes, tags, and selected-segment geometry through the public mapping data pipeline.
Coordinate geometry is converted into kinematic outputs that describe turning demand and stopping-distance sensitivity.
The same road can be retested under explicit assumptions to show how model output changes.
Outputs show whether a road appears typical, elevated, or unusually demanding within a sampled model context.
The model is not an official safety rating and has not been fully calibrated against national collision-history datasets.
The intended route for competition judges is deliberately short: understand the research question, check the method, inspect the saved results, then test the live model.
Start with why the project tests public road geometry and physics-informed assumptions before collision-history data is introduced.
Open the app, select a road, read the Annualised Comparative Model Output, safe speed, stopping distance, and data-confidence notes.
Use the dashboard to inspect saved model-output records while keeping the boundary clear: not observed crash rates, not official ratings.
The public site explains the same pipeline used by the live app. The goal is transparency: what is measured, what is derived, what is assumed, and what the output can responsibly mean.
Road Risk queries public map data and road geometry rather than relying on static screenshots or decorative map tiles.
The clicked location is snapped onto the road's polyline so length, heading, radius, and curvature can be derived from the geometry.
Curvature, speed, friction, stopping distance, and safe-speed checks are interpreted through transparent formula blocks.
Weather, lighting, vehicle profile, behaviour, traffic exposure, and missing data are handled as explicit assumptions.
The result is a modelled comparison supported by graphs, maths, exports, and clear limits.
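The snapping step described above is a standard point-to-polyline projection. A minimal sketch in planar coordinates (the live app's exact implementation is not shown on this page, and these function names are illustrative):

```python
def project_point_onto_segment(p, a, b):
    """Project point p onto segment a-b; return the closest point and the
    parameter t in [0, 1] along the segment."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:          # degenerate (zero-length) segment
        return a, 0.0
    t = ((px - ax) * dx + (py - ay) * dy) / seg_len_sq
    t = max(0.0, min(1.0, t))      # clamp the projection to the segment
    return (ax + t * dx, ay + t * dy), t

def project_onto_polyline(p, polyline):
    """Return the point on the polyline closest to p."""
    best = None
    best_dist_sq = float("inf")
    for a, b in zip(polyline, polyline[1:]):
        q, _ = project_point_onto_segment(p, a, b)
        d_sq = (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2
        if d_sq < best_dist_sq:
            best_dist_sq, best = d_sq, q
    return best
```

In practice, geographic coordinates would first be converted to a local projected frame so that distances are meaningful in metres.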
Open the live map and choose a location.
Click a visible road segment and check the highlighted geometry.
Start with annual output, safe speed, and stopping distance.
Adjust weather, vehicle, speed, visibility, and driver context.
Inspect how the current values are derived.
Use percentile context before interpreting a raw model output.
Preserve geometry, assumptions, values, and evidence.
Approximate lines of code across the live app and public project files.
Approximate lines of live-app logic for map interaction, modelling, routes, graphs, and exports.
Approximate lines of interface styling across the app and public research pages.
Iteration history used to develop, test, and refine the prototype.
Booklet-style development entries documenting the research and build sequence.
Unique visible controls, panels, outputs, and interface elements across the live application.
Scenario, route, graph, export, and interpretation controls exposed to the user.
Vehicle presets used to vary model assumptions and interpretation context.
Road tags considered across geometry, context, surface, speed, and infrastructure assumptions.
Default modelling values used to keep assumptions explicit and reviewable.
Scenario profiles for comparing how conditions alter the same road geometry.
Distribution views for percentile context, tail behaviour, and comparative interpretation.
Summary indicators used to interpret selected roads, distributions, and route-level outputs.
CSV, GeoJSON, JSON, distribution data, and PNG report outputs.
The live app surfaces several values because no single number should carry the full interpretation.
Scenario-adjusted model output for the selected road. It appears in the risk card, graphs, mathematical detail, and exports; it is not an observed crash rate or official assessment.
Converts the annual output into a daily-style view for interpretation. It does not turn the model into an observed daily crash rate.
Shows a simplified friction-limited speed estimate for the selected geometry and scenario assumptions.
Shows how speed, reaction time, and friction assumptions affect the distance needed to stop.
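Under the standard point-mass friction model, these two checks take the familiar form (symbols here are generic physics notation, not necessarily the app's exact variables):

```latex
v_{\text{safe}} = \sqrt{\mu g R}
\qquad
d_{\text{stop}} = v\,t_r + \frac{v^2}{2 \mu g}
```

where μ is the assumed friction coefficient, g is gravitational acceleration, R is the derived curve radius, v is the scenario speed, and t_r is the assumed reaction time.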
Summarises visible infrastructure/context clues and fallback assumptions. It is not an official road-standard rating.
Places the selected road relative to sampled roads under the current assumptions so raw model outputs are easier to interpret.
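Percentile context of this kind reduces to an empirical rank within the sampled outputs. A sketch, assuming the sampled model outputs are plain numbers (the function name is illustrative):

```python
def percentile_rank(value, sample):
    """Percentage of sampled model outputs at or below `value`."""
    if not sample:
        raise ValueError("empty sample")
    at_or_below = sum(1 for s in sample if s <= value)
    return 100.0 * at_or_below / len(sample)
```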
Route analysis can show mean model output and the highest sampled segment, helping identify local hotspots along a route.
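The route-level summary described above amounts to a mean over sampled segments plus the single highest one. A minimal sketch with illustrative field names, not the app's actual data model:

```python
def summarise_route(segment_outputs):
    """segment_outputs: list of (segment_id, model_output) pairs.
    Returns the mean output and the highest-output segment (the hotspot)."""
    if not segment_outputs:
        raise ValueError("route has no sampled segments")
    mean_output = sum(v for _, v in segment_outputs) / len(segment_outputs)
    hotspot = max(segment_outputs, key=lambda pair: pair[1])
    return mean_output, hotspot
```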
Road Risk is designed as an inspectable modelling surface. It shows the result, but also the geometry, formula chain, scenario choices, confidence notes, and exportable evidence trail behind it.
The app does not claim to estimate an observed crash rate for an individual road. It provides a comparative model output that can support research, discussion, education, and early screening.
A scenario-adjusted comparative value used consistently across the risk panel, maths, graphs, and exports.
Curvature, friction, reaction time, and speed assumptions are translated into interpretable vehicle-motion checks.
A selected road can be compared against sampled roads nearby, making the output less isolated and more interpretable.
These compact examples mirror the physics language used on the Maths page and in the live app's method panels.
Segment length and heading change produce a curve-radius estimate.
Friction and radius create a simplified turning-speed check.
Speed, reaction time, and friction assumptions shape stopping demand.
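The three checks above can be sketched in a few lines, assuming SI units and a point-mass model; the constants are illustrative defaults, not the app's calibrated values:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def curve_radius(segment_length_m, heading_change_rad):
    """Arc-based radius estimate: R = L / Δθ (infinite for a straight segment)."""
    if heading_change_rad == 0.0:
        return math.inf
    return segment_length_m / abs(heading_change_rad)

def friction_limited_speed(radius_m, friction_mu):
    """Simplified turning-speed check: v = sqrt(μ g R)."""
    return math.sqrt(friction_mu * G * radius_m)

def stopping_distance(speed_ms, reaction_s, friction_mu):
    """Reaction distance plus friction-limited braking distance."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * friction_mu * G)
```

For example, a 50 m segment turning through 0.5 rad implies a roughly 100 m radius, which under a dry-road friction assumption of 0.7 allows about 26 m/s before the turning-speed check is exceeded.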
Students can see curvature, speed, friction, stopping distance, and uncertainty on real roads rather than abstract textbook diagrams.
The model separates measured geometry, public tags, derived quantities, assumptions, and comparative outputs.
Proactive modelling can highlight roads worth inspecting while remaining clear that formal audits require more evidence.
CSV, GeoJSON, JSON, graph, and map outputs make the analysis easier to review, discuss, and reproduce.
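As an illustration of how a model-output case can be serialised for review, a GeoJSON Feature sketch; the property names here are assumptions for demonstration, not the app's actual export schema:

```python
import json

def export_case_geojson(coords, properties):
    """Wrap a road segment's coordinates and model-output properties as a
    GeoJSON Feature string (coordinates are lon/lat pairs, per the spec)."""
    feature = {
        "type": "Feature",
        "geometry": {"type": "LineString", "coordinates": coords},
        "properties": properties,
    }
    return json.dumps(feature)
```

Keeping geometry and assumptions in the same record is what makes an exported case reproducible by a later reviewer.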
Research question, hypothesis, what was built, responsible use, timeline, and judge workflow.
One consolidated guide to the model pipeline, scenario controls, data quality, and validation boundary.
Saved cases, graphs, evidence-quality notes, scenario comparisons, and clear interpretation boundaries.
Booklet-aligned sources, technical documentation, licensing, and attribution.
Select a road, change one assumption, inspect the output, and save or export a model-output case.
Select a road, inspect the model output, change assumptions, generate graphs, analyse routes, and export the same evidence trail described across this site.