Commitment to Responsible, Secure & Transparent AI
Our promise: Neural Hiive is committed to deploying Artificial Intelligence in a way that is safe, secure, transparent, and aligned with responsible innovation principles. This document addresses the most common questions about how our AI features are designed, governed, and operated within the GENii Pulse platform.
All capitalised terms not defined here carry the meanings assigned in the Neural Hiive Software Subscription Agreement or your governing customer contract. Neural Hiive combines Retrieval-Augmented Generation (RAG), vector search, and enterprise-grade large language models (LLMs) to enhance productivity and automation experiences. Customer Data remains isolated within each tenant and is never used to train shared models.
General Information
What types of AI technologies are used in Neural Hiive's AI Engine?
GENii Pulse AI features are built on a combination of enterprise-grade, purpose-selected technologies:
- Retrieval-Augmented Generation (RAG): Grounds AI responses in your organisation's own content, reducing hallucinations and improving accuracy.
- Natural Language Processing (NLP): Enables intuitive, conversational interactions across assistants and workflows.
- Vector similarity search over learned embeddings: Powers fast, semantically relevant content retrieval from your connected knowledge sources.
- Generative AI via third-party LLMs: Used strictly for inference with zero data retention and no training on Neural Hiive customer inputs.
All AI models are sourced from reputable enterprise providers and evaluated open-source ecosystems, subject to strict licensing, security, and privacy assessment before adoption.
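To make the RAG and vector-retrieval flow above concrete, here is a minimal sketch of the pattern: embed the query and the tenant's documents, rank by cosine similarity, and ground the prompt in the retrieved content. This is an illustration only, not Neural Hiive's implementation; all function names are hypothetical, and the bag-of-words embedding is a stand-in for a learned embedding model.

```python
import math

def embed(text: str, vocab: list[str]) -> list[float]:
    # Toy bag-of-words embedding over a fixed vocabulary; production
    # systems use learned dense embeddings, but the retrieval logic
    # is the same.
    tokens = text.lower().split()
    vec = [float(tokens.count(word)) for word in vocab]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Vocabulary is drawn from the tenant's own documents only.
    vocab = sorted({w for d in docs for w in d.lower().split()})
    q = embed(query, vocab)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d, vocab)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str], k: int = 2) -> str:
    # Ground the model in retrieved tenant content, reducing hallucination.
    context = "\n".join(retrieve(query, docs, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The key property for tenant isolation is that both the vocabulary and the candidate documents come exclusively from one customer's environment.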
Is the AI solution a product, a service, or in-house development?
GENii Pulse AI capabilities are a combination of proprietary Neural Hiive engineering and carefully vetted third-party components:
- Neural Hiive-built: RAG and vector search architecture, GENii Pulse UI, orchestration layers, and workflow automation engine.
- Third-party LLMs: Used exclusively for inference; providers are contractually restricted from storing or training on your data.
- Optional integrations: Supplemental generative tools (e.g., document generators, summarisers) that plug into the GENii Pulse platform with the same data protections applied.
What are the ethical and legal risks, and how are they mitigated?
Neural Hiive proactively identifies and manages ethical and legal risks across all AI features:
- Privacy-by-design architecture ensures tenant isolation and minimises data exposure at every layer of the stack.
- Zero-retention model selection prevents third-party providers from storing or training on Customer Data.
- Bias mitigation procedures are applied to prompt engineering and retrieval inputs to reduce skewed or harmful outputs.
- User guidance is built into the platform to set clear expectations around appropriate reliance on AI-generated content.
- Legal compliance with IP, copyright, and applicable data protection legislation is maintained as a baseline, not an afterthought.
Your Data & Neural Hiive AI Features
Two guarantees we stand behind: When you use Neural Hiive AI Features, your Usage Data and Customer Data — including all inputs and outputs — are never made available to other customers and are never used to train, re-train, or fine-tune any underlying LLM, whether in-house or third-party.
Every customer operates in a fully isolated environment. Your data cannot be accessed, referenced, or surfaced in another organisation's AI experience — ever.
Your inputs and outputs are used solely to fulfil your requests in real time. They are not fed back into any shared or foundational AI model.
Third-party LLMs process your queries strictly for inference purposes within a session. Providers are contractually bound to zero-retention obligations.
Enterprise customers may review data handling terms via their Software Subscription Agreement and Data Processing Agreement (DPA) to confirm these commitments in writing.
Data & Privacy Considerations
What type of data does Neural Hiive AI use?
Neural Hiive AI features use only the content you and your organisation provide within your tenant — such as documents, knowledge base articles, and user queries submitted through GENii Pulse. All data remains fully isolated within each customer's environment. No data is shared across tenants, and Customer Data is never used to train shared models.
How does the AI system process personal data?
Neural Hiive AI features do not require personally identifiable information (PII) to function. PII is excluded from vectorisation pipelines and LLM prompts by design. All AI processing remains covered under your existing data protection agreements with Neural Hiive. Any future AI feature that involves PII will require explicit customer opt-in before activation.
What safeguards are in place for data privacy?
- Customer Data stays within Neural Hiive's secure, isolated tenant environment at all times.
- All data is encrypted in transit (TLS 1.2+) and at rest (AES-256).
- Access is governed by strict role-based permissions and least-privilege principles.
- Zero-retention enterprise AI providers are contractually required to handle your data for inference only.
- Ongoing privacy risk assessments and privacy-by-design practices are embedded throughout the AI development lifecycle.
Security
How does Neural Hiive protect AI-generated content and associated data?
Security is embedded across all layers of the GENii Pulse AI stack:
- Encryption: Data is encrypted in transit using TLS and at rest using AES-256, including all vectors, embeddings, and cached outputs.
- Access controls: AI features enforce the same role-based access controls as the rest of the GENii Pulse platform. Users can only retrieve content they are already authorised to access.
- Audit logging: AI interactions are captured in audit logs to support governance, incident investigation, and compliance reporting.
- Infrastructure isolation: AI compute and storage resources are logically isolated per tenant, preventing cross-contamination of data or model context.
- Vendor due diligence: All third-party AI providers undergo security assessment and are required to maintain SOC 2 Type II certification or equivalent.
Security concern? If you discover a potential vulnerability or have a security question about Neural Hiive AI features, please contact us at security@neuralhiive.ai and we will respond promptly.
Transparency & Interpretability
What level of transparency is available for AI outputs?
Neural Hiive provides meaningful transparency into how GENii Pulse AI features operate:
- RAG transparency: The platform surfaces which tenant content was retrieved and used to inform a given AI response, enabling users to trace output provenance back to specific source documents.
- Structured prompting: Guardrails and structured prompt templates are used to keep AI behaviour predictable and aligned with your organisation's context.
- Third-party LLM opacity: The underlying LLMs themselves operate as black boxes — Neural Hiive cannot expose their internal reasoning steps, which is a limitation shared across the industry.
- Roadmap: Additional interpretability features, including confidence indicators and source attribution scoring, are on the GENii Pulse product roadmap.
How does Neural Hiive handle AI bias and fairness?
Neural Hiive takes a proactive stance on bias management:
- Prompt engineering guidelines are reviewed to reduce the risk of skewed or discriminatory outputs.
- Retrieval pipelines are designed to surface a representative range of content rather than over-indexing on recent or high-frequency material.
- We monitor for systematic output patterns that may indicate bias and update our guardrails accordingly.
- Customers are encouraged to report any AI output they believe reflects bias or inaccuracy, so we can investigate and improve.
Liability
What is the liability framework for AI-generated content?
Neural Hiive's liability commitments regarding AI-generated materials are governed by the customer's Software Subscription Agreement established during procurement. That agreement sets out the scope of liability, indemnification terms, and any applicable service-level commitments related to AI feature availability and accuracy.
We recommend enterprise customers review the AI-specific provisions of their Software Subscription Agreement with their legal team. If you do not yet have an agreement in place or would like to discuss AI-specific contractual terms, please contact your Neural Hiive account team.
Important note on AI outputs: Like all generative AI systems, GENii Pulse AI features may occasionally produce outputs that are incomplete, inaccurate, or contextually inappropriate. Users should apply professional judgement when acting on AI-generated content, particularly in high-stakes decisions. Neural Hiive provides user guidance within the platform to reinforce this.
Contact & Support
If you have further questions or feedback about AI trust and safety at Neural Hiive, we want to hear from you. Our team is committed to responding to all trust and safety enquiries promptly and thoroughly.
You may also reach your dedicated Neural Hiive account team directly for questions specific to your subscription, contractual terms, or Data Processing Agreement (DPA). Enterprise customers can request a copy of our DPA at legal@neuralhiive.ai.