Intelligence Service

Intelligence Service is responsible for providing DataSapien's patented three-tier on-edge intelligence approach. Intelligence Service provides access to:

  • Rules - the first level of intelligence; static in nature
  • ML Models - conventional probabilistic models designed to work in a specific field
  • AI Models - generative AI models designed to work in a wide range of fields

Rules

Rules are designed on DataSapien Orchestrator and deployed to Mobile SDK instances. Mobile SDK evaluates rules at various points during the app lifecycle:

  1. Upon collection or change of MeData values; this is triggered by MeData Service
  2. When the SDK is initialised by the host application
  3. When Intelligence Service rule evaluation is called by scripts
  4. When Intelligence Service rule evaluation is called by your host app
warning

Note that rule evaluation may be limited or completely impossible when your host app is in the background because of the limitations imposed by mobile operating systems.

Using ML Models

To use an ML model, you must first provision it on DataSapien Orchestrator. Each ML model has a unique programmatic name; you provide this name to Intelligence Service to invoke the model.

info

If no model is available for the name you provide, Intelligence Service will return an error indicating this. You can query ML models and start their download using Intelligence Service functions.

You can invoke an ML model in one of two ways:

  • Directly from your host application: in this case Mobile SDK is just the delivery channel and wrapper for the ML model
  • From scripts: in this case you can use ML models in any use-case Mobile SDK provides, including Journeys & Exchanges.
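Invoking an ML model directly from the host application could look like the sketch below, which combines the isModelDownloaded and invoke functions documented under Intelligence Service Functions. The model name "fraud-detector" and the prompt are placeholders for illustration, not real provisioned assets.

```swift
// Sketch: invoking a provisioned ML model from the host app.
// "fraud-detector" is a placeholder model name for illustration.
let service = DataSapien.getIntelligenceService()

if service.isModelDownloaded(modelName: "fraud-detector") {
    service.invoke(
        modelName: "fraud-detector",
        systemPrompt: "Classify this transaction: ...",
        streaming: { partial in print("Partial output: \(partial)") },
        completion: { result in print("Result: \(result)") },
        error: { err in print("Invocation failed: \(err)") }
    )
}
```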

Using AI Models

Similarly to ML models, AI models need to be provisioned on DataSapien Orchestrator first. Like ML models, each AI model has a unique programmatic name.

Invoking AI models is similar to invoking ML models: directly from your host app or via scripts.

Hallucinations & Fact Checking

Because of their generative nature, AI models are prone to hallucinations. The DataSapien architecture allows you to fact-check AI output by:

  1. Writing scripts to implement simple accept / reject algorithms
  2. Using an ML model: feeding AI output to an ML model for fact checking
  3. Using an additional AI Model: feeding first AI output to an additional AI model
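Option 3 above can be sketched by chaining two calls to the invoke function documented below: the first model generates output and the second model acts as a checker. The model names and prompt wording here are placeholders, not confirmed provisioned models.

```swift
// Sketch: fact-checking one AI model's output with a second AI model.
// Model names and prompts are placeholders for illustration.
let service = DataSapien.getIntelligenceService()

service.invoke(
    modelName: "llama-3.2",
    systemPrompt: "Summarise the user's spending this month.",
    streaming: { _ in },
    completion: { firstOutput in
        // Feed the first model's output to a second, checking model.
        service.invoke(
            modelName: "fact-checker",
            systemPrompt: "Verify the following claims and answer ACCEPT or REJECT: \(firstOutput)",
            streaming: { _ in },
            completion: { verdict in print("Fact-check verdict: \(verdict)") },
            error: { err in print("Checker failed: \(err)") }
        )
    },
    error: { err in print("First model failed: \(err)") }
)
```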

Intelligence Service Functions

To access IntelligenceService functions, get its instance from the DataSapien object: DataSapien.getIntelligenceService().

Check if model is downloaded on device

Checks whether the given model has been downloaded.

Is model downloaded
// Signature
public func isModelDownloaded(modelName: String) -> Bool

// Usage
DataSapien.getIntelligenceService().isModelDownloaded(modelName: "llama-3.2")

Download & Load model

Downloads the given model; if it is already downloaded, it is loaded into memory.

Download & Load model
// Signature
public func load(modelName: String, status: @escaping @Sendable (Double) -> (), completion: @escaping @Sendable (ModelContainer) -> (), error: @escaping @Sendable (Error) -> ())
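A usage sketch for load, assuming the same access pattern as isModelDownloaded above; the model name "llama-3.2" is a placeholder.

```swift
// Usage sketch: download (or load) a model, reporting progress.
// "llama-3.2" is a placeholder model name.
DataSapien.getIntelligenceService().load(
    modelName: "llama-3.2",
    status: { progress in print("Download progress: \(progress)") },
    completion: { container in print("Model ready: \(container)") },
    error: { err in print("Download failed: \(err)") }
)
```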

Downloaded model list

Returns the list of models downloaded to the host application.

Downloaded model list
// Signature
public func getDownloadedModelsList() -> [String]
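A usage sketch, assuming the same access pattern as the functions above:

```swift
// Usage sketch: list downloaded models by programmatic name.
let models = DataSapien.getIntelligenceService().getDownloadedModelsList()
print("Downloaded models: \(models)")
```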

Invoke model

Invokes the model with the given prompt.

Invoke model
// Signature
public func invoke(modelName: String, systemPrompt: String, streaming: @escaping @Sendable (String) -> (), completion: @escaping @Sendable (String) -> (), error: @escaping @Sendable (Error) -> ())
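A usage sketch for invoke; the model name and prompt are placeholders. The streaming closure receives partial output as it is generated, while the completion closure receives the full response.

```swift
// Usage sketch: invoke a model with a system prompt, streaming tokens as they arrive.
// Model name and prompt are placeholders.
DataSapien.getIntelligenceService().invoke(
    modelName: "llama-3.2",
    systemPrompt: "You are a helpful assistant. Summarise the user's data.",
    streaming: { token in print(token, terminator: "") },
    completion: { full in print("\nFull response: \(full)") },
    error: { err in print("Invoke failed: \(err)") }
)
```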

Unload/stop model

Stops the model and unloads it from memory.

Unload model
// Signature
public func stop()
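A usage sketch, assuming the same access pattern as the functions above:

```swift
// Usage sketch: stop and unload the currently loaded model.
DataSapien.getIntelligenceService().stop()
```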