The Ultimate Salesforce Model Builder Cheat Sheet

Salesforce’s Einstein Studio offers a comprehensive, no-code solution for building and integrating AI models directly within Salesforce. This cheat sheet provides a quick guide to mastering Salesforce Model Builder: creating models, integrating external AI platforms, and optimizing model performance for your business needs.

What You Will Learn:

  • How to create and customize AI models using Salesforce’s Model Builder.
  • Best practices for integrating external AI models from platforms like AWS and Google Vertex AI.
  • Key settings to optimize model accuracy and performance.
  • Steps for testing, monitoring, and refining your AI models.

Let’s get started!

What is Einstein Studio?

Einstein Studio is a powerful tool within Salesforce’s Data Cloud for creating, managing, and integrating AI models. It allows users to build AI models with no-code interfaces, connect to external platforms like AWS SageMaker and Google Vertex AI, or use existing large language models (LLMs) from providers like OpenAI. This flexibility empowers businesses to incorporate AI-driven insights and predictions directly into their Salesforce environment.


Key Considerations:

  • Integration Options: You can either build AI models from scratch or connect to external AI models hosted on platforms like AWS, Google, or Azure.
  • No-Code Setup: Einstein Studio’s user-friendly interface allows users without technical expertise to create and manage models.
  • Model Types: Supports regression, binary classification, and more for various business needs like predicting opportunity amounts or customer churn.
  • External Model Dependency: Connected models remain hosted externally, and Salesforce accesses them via APIs, which may affect latency or reliability. 
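
Because these connected models stay on the provider's infrastructure, every prediction involves a network round trip. The snippet below is a minimal sketch of measuring that round trip against a hypothetical externally hosted inference endpoint (the URL, payload, and key are placeholders, not a real Salesforce or provider API):

```python
import time
import requests

# Placeholder endpoint and credentials for an externally hosted model.
ENDPOINT = "https://example-ml-host.com/v1/models/churn-model:predict"
API_KEY = "YOUR_API_KEY"

payload = {"instances": [{"industry": "Retail", "annual_revenue": 1200000, "open_cases": 3}]}

start = time.time()
response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,  # guard against a slow or unreachable external host
)
latency_ms = (time.time() - start) * 1000

print(f"status={response.status_code}, latency={latency_ms:.0f} ms")
print(response.json())
```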

Create Model from Scratch

Building a model from scratch using Einstein Studio allows you to create AI-powered predictions based on historical data. The process involves selecting and structuring data, defining the outcome, training the model, and refining variables for optimal predictive accuracy. This task enables businesses to leverage AI to forecast future outcomes, such as customer behavior, sales trends, or operational efficiencies.

Key Considerations:

  • Selecting Data Source: Ensure your data source has at least 400 rows and 3 relevant columns, and avoid overly complex datasets with more than 50 columns. Accuracy and completeness are crucial (a quick validation sketch follows this list).
  • Outcome Variable Selection: Choose a measurable KPI or outcome variable that aligns with business goals, such as revenue or customer churn.
  • Training Data Subset: If needed, filter the training data to focus on the most relevant records, ensuring a balanced sample for better predictions.
  • Setting Goals: Clearly define whether to maximize or minimize the outcome variable based on your business objective, like maximizing revenue or minimizing churn.
  • Data Preparation: Include only relevant variables and use Autopilot if unsure which ones are most impactful. Exclude irrelevant or sensitive fields.
  • Algorithm Selection: Choose GLM for simple relationships, GBM for complex patterns, or XGBoost for large datasets and optimized performance.
  • Review and Edit: Double-check all setup steps before training, focusing on data source and outcome variables, as these can’t be changed later.
  • Data Privacy: Exclude personally identifiable information (PII) and ensure data governance and privacy standards are followed.
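
Before pointing Model Builder at a data source, a quick sanity check against the thresholds above can save a failed training run. Here is a minimal sketch, assuming the data has been exported to a CSV file; the file name and PII column names are only examples:

```python
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical export of the data source

# Thresholds mentioned above: at least 400 rows, 3 to 50 relevant columns.
assert len(df) >= 400, f"Only {len(df)} rows; Model Builder expects at least 400."
assert 3 <= df.shape[1] <= 50, f"{df.shape[1]} columns; keep between 3 and 50 relevant columns."

# Example PII-style columns to exclude before training (adjust to your schema).
pii_columns = [c for c in df.columns if c.lower() in {"email", "phone", "ssn", "birthdate"}]
df = df.drop(columns=pii_columns)

# Flag mostly empty columns, since accuracy and completeness are crucial.
sparse = df.columns[df.isna().mean() > 0.5].tolist()
print("Columns that are more than 50% empty:", sparse)
```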

Insight:

Using the Autopilot feature within Einstein Studio can help in automatically selecting the most impactful variables for prediction. This ensures higher model performance, especially if you’re unfamiliar with which variables are most relevant.
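
Autopilot does this ranking inside Einstein Studio; as a rough outside-of-Salesforce analogy, you could estimate which columns carry the most predictive signal with a quick feature-importance pass like the sketch below (the file name, outcome column, and library choice are assumptions for illustration):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("training_data.csv")   # hypothetical export of the data source
target = "opportunity_amount"           # hypothetical outcome variable

# One-hot encode text columns and fill gaps so the model can consume the data.
X = pd.get_dummies(df.drop(columns=[target])).fillna(0)
y = df[target]

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Rank variables by how much they contribute to the model's predictions.
importance = pd.Series(model.feature_importances_, index=X.columns)
print(importance.sort_values(ascending=False).head(10))
```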

Also Read – The Ultimate Salesforce Einstein Copilot Cheat Sheet

Bring Your Own Large Language Model (BYOLLM)

The Bring Your Own Large Language Model (BYOLLM) feature allows users to integrate external AI models with Salesforce via Einstein Studio. This capability enables organizations to connect their own foundation models, such as those hosted on Azure, OpenAI, or Google Vertex AI, to create custom generative AI configurations. Users can evaluate the model’s responses in the Model Playground before deploying it in production.

Key Considerations:

  • Model Integration: Ensure that your foundation model is hosted on a compatible external platform like Azure OpenAI or Google Vertex AI for seamless integration.
  • Endpoint Setup: When adding a foundation model, establish a secure endpoint connection between the external model provider and Salesforce for optimal performance (a connectivity-check sketch follows this list).
  • Prompt Configuration: Utilize Prompt Builder to customize prompt templates and test various prompts with your integrated model before deploying it.
  • Monitoring: Use the monitoring dashboard to track model inference use and errors. Adjustments to prompts and configurations can improve overall model performance.
  • Editing Restrictions: Once a foundation model is linked to other model configurations, editing becomes limited. To modify the foundation model, you must first remove any associated configurations.
  • Model Usage Impact: Changing the foundation model may affect usage limits and costs, depending on the complexity and settings of the new model.
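
Before linking the model in Einstein Studio, it can help to confirm the external endpoint responds on its own. Below is a minimal connectivity-check sketch using the Azure OpenAI Python SDK, with a placeholder endpoint, key, and deployment name (your values will differ, and other providers have their own SDKs):

```python
from openai import AzureOpenAI  # pip install openai

# Placeholder connection details; substitute the endpoint, key, and
# deployment name from your own Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",
    api_key="YOUR_AZURE_OPENAI_KEY",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # the Azure deployment name, not the base model name
    messages=[
        {"role": "system", "content": "You draft short, professional sales follow-up emails."},
        {"role": "user", "content": "Write a two-sentence follow-up for yesterday's product demo."},
    ],
)

print(response.choices[0].message.content)
```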

Insight:

The BYOLLM feature empowers businesses to integrate AI models tailored to their specific needs, offering flexibility in managing and testing models in a secure and scalable way. Before deploying a model, use the Model Playground to fine-tune responses, ensuring accuracy and relevance to your business requirements.


Salesforce Supported Models

Salesforce provides a wide range of Salesforce-managed Large Language Models (LLMs) that are pre-integrated into Einstein Studio for fast and efficient prompt creation. These models allow businesses to utilize AI-driven functionality across different applications, including Sales Emails and customer interactions. For more advanced needs, companies can leverage Bring Your Own Large Language Model (BYOLLM) to integrate custom models hosted on platforms like Azure and OpenAI.

The following Salesforce-managed models are available for quick integration via Prompt Builder:

  • Anthropic Claude 3 Haiku
  • Azure OpenAI GPT-3.5 Turbo
  • Azure OpenAI GPT-3.5 Turbo 16k
  • Azure OpenAI GPT-4 Turbo
  • OpenAI GPT-4
  • OpenAI GPT-4o
  • OpenAI GPT-4o mini
  • OpenAI GPT-4 32k
  • OpenAI GPT-4 Turbo

 For businesses with custom AI needs, BYOLLM supports the following additional models:

  • Anthropic Claude 3 Opus
  • Anthropic Claude 3 Sonnet
  • Anthropic Claude 3.5 Sonnet
  • Azure OpenAI GPT-4o
  • Google Gemini 1.5 Pro
  • OpenAI GPT-4o

Create and Edit Model Configuration

In Einstein Studio, you can create a model configuration to customize how your foundation model operates for specific use cases. By adjusting hyperparameters such as temperature, frequency penalty, and presence penalty, you can influence how the model generates responses to prompts.

Data masking ensures sensitive information is protected during testing, aligning with org-level privacy settings.

Key Considerations:

  • Temperature Setting: Adjust the temperature based on how creative or consistent you want the model’s responses to be.
  • Frequency Penalty: Use this setting to reduce repetitive phrases in responses.
  • Presence Penalty: Encourage diversity in token usage by increasing this value.
  • Data Masking: Always enable data masking to protect sensitive information, especially when testing with third-party models.
  • Hyperparameter Testing: Test various settings in the Model Playground to evaluate the model’s response quality.
  • Model Update: Editing model settings affects any associated prompts, so it’s best to create a new model if you need to change its configuration.
  • Limited Control for Some Models: Some settings, like temperature control, are not available for all foundation models.

Insight: 

The temperature setting is a key hyperparameter that controls the creativity of the model’s responses. Lower values (closer to 0) will result in more predictable, consistent answers, making it ideal for tasks that require precision or factual accuracy. Higher values (closer to 2) introduce more randomness, which can foster creativity but may lead to less coherent responses.
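
As an illustration of how these hyperparameters are typically exposed, here is a sketch using the OpenAI Python SDK directly, outside Einstein Studio; the prompt, key, and values are only examples:

```python
from openai import OpenAI  # pip install openai

client = OpenAI(api_key="YOUR_OPENAI_KEY")  # placeholder key

prompt = "Suggest a subject line for a contract renewal reminder email."

for temperature in (0.0, 1.5):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,   # near 0 = predictable and consistent; near 2 = more random and creative
        frequency_penalty=0.5,     # discourages repeating the same phrases
        presence_penalty=0.3,      # nudges the model toward new tokens and topics
        max_tokens=60,
    )
    print(f"temperature={temperature}: {response.choices[0].message.content}")
```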

Also Read – The Ultimate Salesforce Prompt Builder Cheat Sheet

Test Model

Testing a model ensures that it performs as expected before being deployed. Using Model Playground, you can input prompts, adjust settings like data masking and response generation, and evaluate the outputs to refine the model’s responses. This phase helps to identify any necessary adjustments before moving into production.

Key Considerations:

  • Evaluate Hyperparameters: Adjust hyperparameters such as temperature, frequency penalty, and presence penalty to refine responses.
  • Data Masking: Ensure sensitive information is properly masked during testing for better data security.
  • Debugging: Use the debug option to analyze how the model processes and generates responses.
  • Model Activity Monitoring: Keep track of model performance and inferences over time to catch any anomalies early.
  • Temporary Settings: Any adjustments to prompt settings during testing won’t be saved for future model configurations.
  • Error Management: Latency or biased responses may occur, affecting model reliability during testing.

 Things to Take Care of:

  1. Refine Hyperparameters: Adjust temperature to control response creativity, and use frequency/presence penalties to manage token usage (a small testing loop is sketched after this list).
  2. Evaluate Stop Sequence: Carefully configure stop sequences to ensure coherent message endings.
  3. Monitor Inferences: Keep an eye on inference usage to understand model efficiency and ensure it meets the performance requirements for the task.
  4. Handle Errors: Be prepared to debug any model output issues like biased or inaccurate responses, adjusting hyperparameters as necessary.
  5. Iterate Prompt Templates: After successful testing, use the generated outputs to create prompt templates that can be reused efficiently in production. 
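
A small test harness can make this iteration systematic. The sketch below loops over temperature values and stop sequences, recording outputs and errors for comparison; call_model is a hypothetical helper you would wire to whichever provider hosts your model (for example, the Azure OpenAI call sketched earlier):

```python
import itertools

# Hypothetical helper: connect this to whichever provider hosts your model.
def call_model(prompt: str, temperature: float, stop: list[str]) -> str:
    raise NotImplementedError("Wire this to your model endpoint.")

prompt = "Summarize this opportunity for a weekly pipeline review: ..."
temperatures = [0.0, 0.7, 1.2]
stop_sequences = [["\n\n"], ["END"]]

results = []
for temp, stop in itertools.product(temperatures, stop_sequences):
    try:
        output = call_model(prompt, temperature=temp, stop=stop)
        results.append({"temperature": temp, "stop": stop, "output": output, "error": None})
    except Exception as exc:  # record failures (latency, bad output) instead of aborting the run
        results.append({"temperature": temp, "stop": stop, "output": None, "error": str(exc)})

for result in results:
    print(result)
```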

Monitor Model Activity

View Model Inferences and Errors

Monitoring model activity helps ensure that your AI models perform as expected and are fine-tuned over time to align with your business needs. By tracking model inferences and identifying potential errors, you can adjust and improve model accuracy and reliability.

Key Metrics to Track

Inferences: A foundation model is pre-trained on large datasets and does not retrain during inference, but it adapts its responses to the prompts it receives, so prompt quality strongly shapes how well it performs specific language-related tasks. Monitoring inferences helps you assess how many predictions or responses the model has generated over a specific time period. This metric provides a snapshot of overall model activity, helping you measure usage and responsiveness.

Errors: Errors occur when a model generates incorrect or biased responses, faces latency issues, or encounters problems scaling. Monitoring error rates enables you to identify where the model may need adjustment (a simple log-aggregation sketch follows the list below). Potential error types include:

  • Inaccurate Responses: Misunderstanding or misinterpreting a prompt.
  • Biased Outputs: When the model generates responses with unwanted biases, potentially based on flawed training data.
  • Latency Issues: Slow response times or delays in generating outputs.
  • Scalability Concerns: The model may struggle to maintain performance as the number of inferences increases.
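
Einstein Studio surfaces these metrics in its monitoring dashboard; as a rough illustration of the same bookkeeping, the sketch below aggregates a hypothetical export of inference logs (the file name, column names, and thresholds are assumptions):

```python
import pandas as pd

# Hypothetical log export: one row per inference with a timestamp,
# a latency measurement, and an error label (empty when successful).
logs = pd.read_csv("model_inference_log.csv", parse_dates=["timestamp"])

# Inferences per day: a snapshot of overall model activity.
daily_inferences = logs.set_index("timestamp").resample("D").size()
print(daily_inferences.tail())

# Error rate and breakdown by type (inaccurate, biased, latency, scaling, ...).
error_rate = logs["error_type"].notna().mean()
print(f"Overall error rate: {error_rate:.1%}")
print(logs["error_type"].value_counts())

# Flag slow responses separately, since latency issues degrade the user experience.
slow = logs[logs["latency_ms"] > 5000]
print(f"Responses slower than 5 seconds: {len(slow)}")
```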

Conclusion

Mastering Salesforce’s Model Builder empowers your business to harness the full potential of AI, making data-driven decisions more efficient and impactful. Whether you’re building models from scratch or integrating external AI platforms, the key is optimizing performance through careful testing and monitoring.

Looking to streamline your Salesforce projects? GetGenerative.AI helps you create proposals in minutes, saving valuable time so you can focus on what matters most—driving results. 

Try it now and boost your productivity!