Two months ago at NeST Tech Fest 2025, I proudly showcased my personal project “SmartQueryBot” : a fusion of database tech and artificial intelligence.
During the exhibition, one enthusiastic developer asked me a powerful question:
“It’s great… but how and when would we actually use AI in our projects?”
To answer this, I’ve created a simple yet powerful example project called “vT.ExampleProjectForAI” that demonstrates a practical use case: an intelligent address entry system.
The complete code is available on GitHub for you to explore and adapt: https://github.com/venuthomas/vT.ExampleProjectForAI
The Problem: Address Data is Complex
Building an address form seems straightforward until you consider the global complexity:
- There are approximately 195 countries in the world
- Each country has its own administrative divisions (states, provinces, territories)
- Each state/province contains numerous districts, counties, or municipalities
- Phone number formats vary widely between countries
- Postal/ZIP code formats follow different patterns globally
Traditionally, developers have three options:
- Maintain a massive database of geographical hierarchies (expensive and requires updates)
- Use third-party APIs (adds dependencies and potential costs)
- Implement a simplified, generic form (poor user experience)
The Solution: AI-Generated Address Data
What if we could dynamically generate accurate address data on demand? This is where AI shines!
I’ve built a .NET 9 Aspire application that uses AI to generate:
- Complete country lists
- States/provinces for any selected country
- Districts/municipalities for any selected state
- Proper validation patterns for phone numbers and postal codes
You can apply the same principles using your preferred language and frameworks, like TypeScript with Node.js, Python with Flask/Django, etc.
Technology Stack Overview
- .NET 9 Aspire: Simplifies development setup and orchestration of different services (API, Web UI, Caching, AI).
- Blazor: Powers the interactive web frontend.
- Redis: Used for efficient caching of frequently requested data (like the country list).
- Ollama (Local) / Azure OpenAI (Cloud): The AI providers generating the address data.
Project Structure
The solution is organized into logical components:
- vT.ExampleProjectForAI.ApiService: The backend API responsible for interacting with the AI and serving data.
- vT.ExampleProjectForAI.Core: Contains shared models, interfaces (like IAIClient), and the AI client implementations.
- vT.ExampleProjectForAI.Web: The Blazor frontend application used by the end user.
- vT.ExampleProjectForAI.AppHost & vT.ExampleProjectForAI.ServiceDefaults: Standard .NET Aspire projects for application hosting and service configuration.
Configuration Flexibility: Choosing Your AI Provider
A key design goal is flexibility. You can easily switch between a local AI model (using Ollama) for development/offline use and a powerful cloud model (like Azure OpenAI) for production via appsettings.json:
"AIServiceSettings": {
"UseAzureOpenAI": true,
"UseOllama": false,
"OllamaSettings": {
"OllamaEndpoint": "http://localhost:11434",
"OllamaModel": "llama3"
},
"AzureOpenAISettings": {
"AzureOpenAIEndpoint": "https://xxxxx.openai.azure.com/",
"AzureOpenAIKey": "key",
"AzureOpenAIDeployment": "gpt-4o-mini"
}
}
To use Azure OpenAI, set UseAzureOpenAI to true; to use Ollama, set UseOllama to true.
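These settings bind to plain options classes injected via IOptions<AIServiceSettings>. The classes themselves aren’t reproduced in this post; based on how the clients below consume them, a minimal sketch looks like this (property names simply mirror the JSON keys above):
namespace vT.ExampleProjectForAI.Core.Models;
// Sketch of the options classes bound from the "AIServiceSettings" section above.
public class AIServiceSettings
{
    public bool UseAzureOpenAI { get; set; }
    public bool UseOllama { get; set; }
    public OllamaSettings OllamaSettings { get; set; } = new();
    public AzureOpenAISettings AzureOpenAISettings { get; set; } = new();
}
public class OllamaSettings
{
    public string? OllamaEndpoint { get; set; }
    public string? OllamaModel { get; set; }
}
public class AzureOpenAISettings
{
    public string? AzureOpenAIEndpoint { get; set; }
    public string? AzureOpenAIKey { get; set; }
    public string? AzureOpenAIDeployment { get; set; }
}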
Security Warning: Never commit sensitive information like API keys directly into appsettings.json or source control in real-world projects. Use tools like .NET User Secrets, Azure Key Vault, environment variables, or other secure configuration providers.
This setup allows you to:
- Deploy to production using robust and scalable cloud models from Azure OpenAI.
- Develop locally using Ollama without incurring cloud costs.
Service Orchestration with Aspire
The AppHost project uses .NET Aspire to define and manage the application’s components and their dependencies. Aspire handles starting Redis, the Ollama container (if configured locally), the backend API, and the frontend web app, ensuring they can communicate correctly.
var builder = DistributedApplication.CreateBuilder(args);
// Add Redis for caching
var cache = builder.AddRedis("vtAIRedisCache")
.WithDataVolume();
// Add Ollama AI service as a container
var ollama = builder.AddOllama("ollama", 11434)
.WithDataVolume()
.WithLifetime(ContainerLifetime.Persistent)
.WithOpenWebUI()
.AddModel("vtAIOllama", "llama3");
// Add API service with dependencies
var apiService = builder.AddProject<vT_ExampleProjectForAI_ApiService>("apiservice")
.WithHttpEndpoint(7020, name: "api-http")
.WithReference(cache)
.WithReference(ollama)
.WaitFor(cache)
.WaitFor(ollama)
.WithExternalHttpEndpoints();
// Add web frontend
builder.AddProject<vT_ExampleProjectForAI_Web>("webfrontend")
.WithHttpEndpoint(7021, name: "web-http", isProxied: false)
.WithExternalHttpEndpoints()
.WithReference(apiService)
.WaitFor(apiService);
// Build and run
builder.Build().Run();
AI Client Interface
To make switching between AI providers seamless, I created a simple interface:
using OpenAI.Chat; // Assuming usage of OpenAI SDK compatible types
namespace vT.ExampleProjectForAI.Core.AIClients;
public interface IAIClient
{
/// <summary>
/// Generates a completion based on the provided prompt.
/// </summary>
/// <param name="prompt">The instruction for the AI model.</param>
/// <returns>The AI-generated text response.</returns>
Task<string> GenerateCompletionAsync(string prompt);
/// <summary>
/// Generates a streaming completion based on the provided prompt.
/// </summary>
/// <param name="prompt">The instruction for the AI model.</param>
/// <returns>An asynchronous stream of chat message content updates.</returns>
IAsyncEnumerable<ChatMessageContent> GenerateCompletionStreamAsync(string prompt);
}
We then have concrete implementations (AzureAIClient, OllamaClient) that handle the specifics of communicating with each service. Dependency injection is used to provide the correct client based on the configuration.
public class AzureAIClient(IOptions<AIServiceSettings> appOptions) : IAIClient
{
    // Implementation details...
}
public class OllamaClient(IOptions<AIServiceSettings> appOptions) : IAIClient
{
    // Implementation details...
}
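The wiring that picks the right implementation isn’t shown in the post; a minimal sketch of how it could look in the API service’s Program.cs, assuming the flags from appsettings.json drive the choice:
// Program.cs (ApiService): registration sketch, adapt names to the actual project
builder.Services.Configure<AIServiceSettings>(
    builder.Configuration.GetSection("AIServiceSettings"));

var aiSettings = builder.Configuration
    .GetSection("AIServiceSettings")
    .Get<AIServiceSettings>();

// Register the concrete IAIClient based on the configured flags
if (aiSettings?.UseAzureOpenAI == true)
    builder.Services.AddSingleton<IAIClient, AzureAIClient>();
else if (aiSettings?.UseOllama == true)
    builder.Services.AddSingleton<IAIClient, OllamaClient>();
else
    throw new InvalidOperationException("Enable either UseAzureOpenAI or UseOllama in AIServiceSettings.");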
The Azure client’s full code:
using System.ClientModel;
using Azure;
using Azure.AI.OpenAI;
using Microsoft.Extensions.Options;
using OpenAI.Chat;
using vT.ExampleProjectForAI.Core.Models;
using ChatMessage = OpenAI.Chat.ChatMessage;
namespace vT.ExampleProjectForAI.Core.AIClients;
public class AzureAIClient(IOptions<AIServiceSettings> appOptions) : IAIClient
{
private readonly AIServiceSettings _apiSettings = appOptions.Value;
private AzureOpenAIClient _azureClient;
private ChatClient _chatClient;
public async Task<string> GenerateCompletionAsync(string prompt)
{
string response;
var _endpoint = _apiSettings.AzureOpenAISettings.AzureOpenAIEndpoint ??
throw new ArgumentNullException(nameof(_apiSettings.AzureOpenAISettings
.AzureOpenAIEndpoint));
var _apiKey = _apiSettings.AzureOpenAISettings.AzureOpenAIKey ??
throw new ArgumentNullException(nameof(_apiSettings.AzureOpenAISettings.AzureOpenAIKey));
var _deploymentName =
_apiSettings.AzureOpenAISettings.AzureOpenAIDeployment ??
throw new ArgumentNullException(nameof(_apiSettings.AzureOpenAISettings.AzureOpenAIDeployment));
try
{
_azureClient = new AzureOpenAIClient(
new Uri(_endpoint),
new ApiKeyCredential(_apiKey));
_chatClient = _azureClient.GetChatClient(_deploymentName);
var chatMsg = new List<ChatMessage>();
chatMsg.Add(prompt);
// Use the async overload so this async method actually awaits the call
ChatCompletion completion = await _chatClient.CompleteChatAsync(chatMsg);
response = completion.Content[0].Text.Trim();
}
catch (RequestFailedException rfEx)
{
Console.WriteLine($"Azure OpenAI API request failed: {rfEx.Status} - {rfEx.Message}");
throw;
}
catch (Exception e)
{
Console.WriteLine($"An unexpected error occurred: {e.Message}");
throw;
}
return response;
}
public async IAsyncEnumerable<ChatMessageContent> GenerateCompletionStreamAsync(string prompt)
{
var _endpoint = _apiSettings.AzureOpenAISettings.AzureOpenAIEndpoint ??
throw new ArgumentNullException(nameof(_apiSettings.AzureOpenAISettings
.AzureOpenAIEndpoint));
var _apiKey = _apiSettings.AzureOpenAISettings.AzureOpenAIKey ??
throw new ArgumentNullException(nameof(_apiSettings.AzureOpenAISettings.AzureOpenAIKey));
var _deploymentName =
_apiSettings.AzureOpenAISettings.AzureOpenAIDeployment ??
throw new ArgumentNullException(nameof(_apiSettings.AzureOpenAISettings.AzureOpenAIDeployment));
_azureClient = new AzureOpenAIClient(
new Uri(_endpoint),
new ApiKeyCredential(_apiKey));
_chatClient = _azureClient.GetChatClient(_deploymentName);
var chatMsg = new List<ChatMessage>();
chatMsg.Add(prompt);
await foreach (var completionUpdate in _chatClient.CompleteChatStreamingAsync(chatMsg))
if (completionUpdate.ContentUpdate != null)
yield return completionUpdate.ContentUpdate;
}
}
The Ollama client’s full code:
using System.Text;
using Microsoft.Extensions.Options;
using Newtonsoft.Json;
using OpenAI.Chat;
using vT.ExampleProjectForAI.Core.Models;
namespace vT.ExampleProjectForAI.Core.AIClients;
public class OllamaClient(IOptions<AIServiceSettings> appOptions) : IAIClient
{
private readonly AIServiceSettings _apiSettings = appOptions.Value;
private readonly HttpClient _httpClient = new();
public async Task<string> GenerateCompletionAsync(string prompt)
{
try
{
var _baseUrl = _apiSettings.OllamaSettings.OllamaEndpoint ??
throw new ArgumentNullException(nameof(_apiSettings.OllamaSettings.OllamaEndpoint));
var _modelName = _apiSettings.OllamaSettings.OllamaModel ??
throw new ArgumentNullException(nameof(_apiSettings.OllamaSettings.OllamaModel));
var request = new
{
model = _modelName,
prompt = prompt,
stream = false
};
var content = new StringContent(JsonConvert.SerializeObject(request), Encoding.UTF8, "application/json");
_baseUrl = $"{_baseUrl}/api/generate";
var response = await _httpClient.PostAsync(_baseUrl, content);
response.EnsureSuccessStatusCode();
var responseBody = await response.Content.ReadAsStringAsync();
var responseObject = JsonConvert.DeserializeObject<OllamaResponse>(responseBody);
return responseObject?.Response ?? string.Empty;
}
catch (Exception e)
{
Console.WriteLine(e);
throw;
}
}
public IAsyncEnumerable<ChatMessageContent> GenerateCompletionStreamAsync(string prompt)
{
// Streaming is not implemented for the Ollama client in this example
throw new NotImplementedException();
}
}
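The OllamaResponse model used above isn’t shown in the post; a minimal sketch covering the fields we actually read from Ollama’s /api/generate response (with Newtonsoft.Json attributes, as in the client) could look like this:
using Newtonsoft.Json;
namespace vT.ExampleProjectForAI.Core.Models;
// Sketch: only the fields this example uses from the /api/generate response.
public class OllamaResponse
{
    [JsonProperty("model")]
    public string? Model { get; set; }
    [JsonProperty("response")]
    public string? Response { get; set; }
    [JsonProperty("done")]
    public bool Done { get; set; }
}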
API Endpoints
The system exposes three key endpoints for Address entry:
GET /api/address/getAllCountries
GET /api/address/getAllStatesByCountry?countryName={countryName}
GET /api/address/getDistrictsByStateAndCountry?countryName={countryName}&stateName={stateName}
using System.Text.Json;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Caching.Distributed;
using vT.ExampleProjectForAI.Core.Models;
using vT.ExampleProjectForAI.Core.Services;
namespace vT.ExampleProjectForAI.ApiService.Controllers;
[Route("api/[controller]")]
[ApiController]
public class AddressController : ControllerBase
{
private readonly IAddressService _addressService;
private readonly DistributedCacheEntryOptions _cacheOptions;
private readonly IDistributedCache _distributedCache;
public AddressController(IAddressService addressService, IDistributedCache distributedCache)
{
_addressService = addressService;
_distributedCache = distributedCache;
_cacheOptions = new DistributedCacheEntryOptions
{
AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(86400) // 24 hours
};
}
[HttpGet("getAllCountries")]
public async Task<ActionResult<List<string>>> GetAllCountries()
{
const string cacheKey = "Countries";
// Try to get data from cache
var cachedData = await _distributedCache.GetStringAsync(cacheKey);
if (!string.IsNullOrEmpty(cachedData)) return Ok(JsonSerializer.Deserialize<List<string>>(cachedData));
// If not cached, get data from service
var countries = await _addressService.GetAllCountriesAsync();
// If data is not null and has countries, cache it
if (countries is not null && countries.Count > 0)
await _distributedCache.SetStringAsync(
cacheKey,
JsonSerializer.Serialize(countries),
_cacheOptions);
return Ok(countries);
}
[HttpGet("getAllStatesByCountry")]
public async Task<ActionResult<CountryData>> GetAllStatesByCountry(string countryName)
{
if (string.IsNullOrEmpty(countryName))
return BadRequest("Country name is required.");
var cacheKey = $"{countryName}:States";
// Try to get data from cache
var cachedData = await _distributedCache.GetStringAsync(cacheKey);
if (!string.IsNullOrEmpty(cachedData)) return Ok(JsonSerializer.Deserialize<CountryData>(cachedData));
// If not cached, get data from service
var states = await _addressService.GetAllStatesByCountryAsync(countryName);
// If data is not null and has states, cache it
if (states is not null && states.States is not null && states.States.Count > 0 &&
states.PhoneRegex is not null && states.ZipRegex is not null)
await _distributedCache.SetStringAsync(
cacheKey,
JsonSerializer.Serialize(states),
_cacheOptions);
return Ok(states);
}
[HttpGet("getDistrictsByStateAndCountry")]
public async Task<ActionResult<List<string>>> GetDistrictsByStateAndCountry(string countryName, string stateName)
{
if (string.IsNullOrEmpty(countryName) || string.IsNullOrEmpty(stateName))
return BadRequest("Country or State name is required.");
var cacheKey = $"{countryName}:{stateName}:Districts";
// Try to get data from cache
var cachedData = await _distributedCache.GetStringAsync(cacheKey);
if (!string.IsNullOrEmpty(cachedData)) return Ok(JsonSerializer.Deserialize<List<string>>(cachedData));
// If not cached, get data from service
var districts = await _addressService.GetDistrictsByStateAndCountryAsync(countryName, stateName);
// If data is not null and has districts, cache it
if (districts is not null && districts.Count > 0)
await _distributedCache.SetStringAsync(
cacheKey,
JsonSerializer.Serialize(districts),
_cacheOptions);
return Ok(districts);
}
}
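The IAddressService contract injected into the controller isn’t listed in the post, but its shape follows directly from the calls above (sketch):
using vT.ExampleProjectForAI.Core.Models;
namespace vT.ExampleProjectForAI.Core.Services;
// Sketch of the service contract implied by the controller.
public interface IAddressService
{
    Task<List<string>> GetAllCountriesAsync();
    Task<CountryData> GetAllStatesByCountryAsync(string countryName);
    Task<List<string>> GetDistrictsByStateAndCountryAsync(string countryName, string stateName);
}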
Caching: The list of world countries doesn’t change often. Calling the AI every single time a user loads the form is inefficient and potentially costly, so we cache the country list in Redis.
Note: The controller above also caches states and districts, using the same 24-hour expiry. Those lists are requested less frequently and can be much larger, so the right caching strategy depends heavily on the specific use case.
Prompt Engineering: The Art of Asking the AI
The magic happens in the AddressService. Here, we construct specific prompts to ask the AI for the data we need, crucially telling it to format the response as JSON.
Getting Countries (Simple List): We need a straightforward list of country names.
public async Task<List<string>> GetAllCountriesAsync()
{
List<string> countryLists = [];
try
{
// The Prompt: Clear instructions for format and content
var prompt = "Generate a list of world countries, sorted alphabetically. " +
"Format the response strictly as a JSON array of strings. " +
"Ensure the response contains *only* the JSON array and no other text, explanations, or formatting. " +
"Example: [\"Afghanistan\", \"Albania\", \"Algeria\"]";
// Call the selected AI client
var response = await _aiClients.GenerateCompletionAsync(prompt);
// Deserialize the JSON response
countryLists = JsonSerializer.Deserialize<List<string>>(response) ?? [];
}
catch (Exception e) { Console.WriteLine(e); /* Handle errors appropriately */ }
return countryLists;
}
Key Prompt Elements:
- Task: “Generate a list of world countries”
- Constraint: “sorted alphabetically”
- Format Specification: “strictly as a JSON array of strings”
- Exclusion: “contains only the JSON array and no other text…” (Crucial for reliable parsing!)
- Example: Provides a clear target format.
Getting States and Regex (JSON Object): Here, we need more than just a list. We need states, plus phone and zip code regex patterns for the given country, all packaged in a specific JSON object structure.
public async Task<CountryData> GetAllStatesByCountryAsync(string countryName)
{
var countryData = new CountryData();
try
{
// The Prompt: Ask for specific keys and formats in a JSON object
var prompt = $"Generate data for the country '{countryName}'. " +
$"The response must be a JSON object containing exactly three keys: " +
$"1. 'states': A JSON array of strings listing all states/provinces/regions within '{countryName}', sorted alphabetically. " +
$"2. 'phoneRegex': A string value representing the typical phone number validation regex pattern for '{countryName}'. " +
$"3. 'zipRegex': A string value representing the typical zip/postal code validation regex pattern for '{countryName}'. " +
$"Ensure the output contains *only* the raw JSON object and absolutely no other text, explanations, or formatting. " +
$"Example structure: {{ \"states\": [\"Alberta\", \"British Columbia\", ...], \"phoneRegex\": \"<regex_pattern_here>\", \"zipRegex\": \"<regex_pattern_here>\" }}";
// Call the AI
var response = await _aiClients.GenerateCompletionAsync(prompt);
// Deserialize the JSON object into our C# class
countryData = JsonSerializer.Deserialize<CountryData>(response);
}
catch (Exception e)
{
Console.WriteLine(e); // Handle errors appropriately
// Consider returning a default/empty CountryData or throwing
}
return countryData ?? new CountryData { States = [] }; // Ensure we don't return null
}
We define a simple C# class to hold this structured data:
using System.Text.Json.Serialization;
public class CountryData
{
[JsonPropertyName("states")]
public List<string> States { get; set; } = []; // Initialize to avoid nulls
[JsonPropertyName("phoneRegex")]
public string? PhoneRegex { get; set; }
[JsonPropertyName("zipRegex")]
public string? ZipRegex { get; set; } // Made nullable for safety
}
Based on the CountryData object above, an example response from the AI looks like:
{ "states": ["Alberta", "British Columbia", ...], "phoneRegex": "<regex_pattern_here>", "zipRegex": "<regex_pattern_here>" }
Key Prompt Elements:
- Context: “Generate data for the country '{countryName}'.”
- Structure: “must be a JSON object containing exactly three keys: ‘states’, ‘phoneRegex’, ‘zipRegex’”
- Data Types/Formats: Specifies JSON array for states, string for regex.
- Regex Escaping: Backslashes inside the generated regex strings need to be escaped for the JSON to parse correctly and for later use; it is worth telling the AI this explicitly in the prompt.
- Exclusion & Example: Reinforces the need for pure JSON output.
Getting Districts (Simple List again): Similar to countries, but specific to a state/country pair.
public async Task<List<string>> GetDistrictsByStateAndCountryAsync(string countryName, string stateName)
{
List<string> districtLists = [];
try
{
// The Prompt: Specific request for districts, sorted, JSON array format
var prompt = $"Generate a list of all districts (or equivalent administrative divisions like counties, municipalities, etc.) " +
$"within the state/region '{stateName}' in the country '{countryName}'. " +
$"The list must be sorted alphabetically. " +
$"Format the response strictly as a JSON array of strings. " +
$"Ensure the output contains *only* the raw JSON array and absolutely no other text, explanations, or formatting. " +
$"Example: [\"Central District\", \"North District\", \"South District\"]";
// Call the AI
var response = await _aiClients.GenerateCompletionAsync(prompt);
// Deserialize
districtLists = JsonSerializer.Deserialize<List<string>>(response) ?? [];
}
catch (Exception e) { Console.WriteLine(e); /* Handle errors */ }
return districtLists;
}
The Importance of Strict Formatting Instructions: Notice the repetition of “Ensure the output contains only the raw JSON…”. LLMs can be chatty. Explicitly telling them not to add explanations, markdown formatting, or introductory text is vital for reliable automated parsing of the response.
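Even with strict instructions, some models occasionally wrap the JSON in markdown fences or add a stray sentence. A small defensive helper (not part of the original project, purely a sketch) can salvage the payload before deserializing:
// Hypothetical helper: strips markdown fences / surrounding text from an AI response.
public static class AiJsonCleaner
{
    public static string ExtractJson(string aiResponse)
    {
        var text = aiResponse.Trim();
        // Remove ```json ... ``` fences if the model added them
        if (text.StartsWith("```"))
        {
            var firstNewline = text.IndexOf('\n');
            var lastFence = text.LastIndexOf("```", StringComparison.Ordinal);
            if (firstNewline >= 0 && lastFence > firstNewline)
                text = text.Substring(firstNewline + 1, lastFence - firstNewline - 1).Trim();
        }
        // Fall back to the outermost bracket/brace pair
        var start = text.IndexOfAny(new[] { '[', '{' });
        var end = text.LastIndexOfAny(new[] { ']', '}' });
        return start >= 0 && end > start ? text.Substring(start, end - start + 1) : text;
    }
}
Calling JsonSerializer.Deserialize on AiJsonCleaner.ExtractJson(response) instead of the raw response makes the parsing step noticeably more forgiving.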
Smart Caching Strategy
Calling an AI for every request can be slow and costly. Caching is essential. Our strategy:
- Countries: Cached for a long duration (e.g., 24 hours) in Redis. The list of world countries changes very infrequently.
- States/Provinces & Regex: Cached for a medium duration (e.g., 6 hours). These change more often than countries but are still relatively stable. Caching the CountryData object avoids separate calls for states and regex patterns.
- Districts/Municipalities: Cached for a medium duration (e.g., 6 hours). While more specific, caching still provides significant benefits if users frequently select the same state.
This tiered approach balances data freshness with performance and cost savings. Cache keys are specific (e.g., Canada:States, Canada:Ontario:Districts, matching the controller above) to avoid collisions. The sample controller uses a single 24-hour policy for all three; a tiered variant is sketched below.
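A minimal sketch of such tiered policies, assuming you swap them in for the controller’s single _cacheOptions field (durations here are illustrative):
using Microsoft.Extensions.Caching.Distributed;
// Sketch: one cache policy per data type instead of the single 24-hour policy above.
public static class CachePolicies
{
    public static readonly DistributedCacheEntryOptions Countries = new()
    {
        AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(24) // country list rarely changes
    };
    public static readonly DistributedCacheEntryOptions States = new()
    {
        AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(6) // CountryData: states + regex
    };
    public static readonly DistributedCacheEntryOptions Districts = new()
    {
        AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(6)
    };
}
The controller would then pass CachePolicies.Countries, CachePolicies.States, or CachePolicies.Districts to SetStringAsync depending on the endpoint.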
The Seamless User Experience
From the user’s perspective, the complexity is hidden:
- Load: The form loads, triggering a call to /api/address/getAllCountries. The API checks Redis; if the list isn’t there, it asks the AI, caches the result, and returns the list to populate the country dropdown.
- Select Country: The user selects a country (e.g., “Canada”). The frontend calls /api/address/getAllStatesByCountry?countryName=Canada. The API checks Redis for Canada:States. If missing, it asks the AI for Canadian states and regex patterns, caches the CountryData object, and returns it. The state dropdown is populated, and the phone/zip input fields update their validation patterns internally.
- Select State: The user selects a state (e.g., “Ontario”). The frontend calls /api/address/getDistrictsByStateAndCountry?countryName=Canada&stateName=Ontario. The API checks Redis for Canada:Ontario:Districts. If missing, it asks the AI, caches the list, and returns it to populate the district dropdown.
All this happens dynamically, driven by AI generation and optimized by caching, without the developer needing to manually curate and maintain vast geographical datasets.
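The Blazor markup isn’t reproduced in this post, but the cascading behaviour boils down to calling the three endpoints as selections change. A condensed, illustrative sketch (the component structure and handler names are my own, not the project’s actual code):
@* AddressForm.razor: illustrative sketch of the cascading dropdowns *@
@using System.Net.Http.Json
@using vT.ExampleProjectForAI.Core.Models
@inject HttpClient Http

<select @onchange="OnCountryChangedAsync">
    <option value="">Select country</option>
    @foreach (var c in countries) { <option value="@c">@c</option> }
</select>

<select @onchange="OnStateChangedAsync">
    <option value="">Select state</option>
    @foreach (var s in countryData?.States ?? []) { <option value="@s">@s</option> }
</select>

<select>
    <option value="">Select district</option>
    @foreach (var d in districts) { <option value="@d">@d</option> }
</select>

@code {
    private List<string> countries = [];
    private CountryData? countryData;
    private List<string> districts = [];
    private string? selectedCountry;

    // Populate the country dropdown on load (served from Redis after the first call)
    protected override async Task OnInitializedAsync() =>
        countries = await Http.GetFromJsonAsync<List<string>>("api/address/getAllCountries") ?? [];

    private async Task OnCountryChangedAsync(ChangeEventArgs e)
    {
        selectedCountry = e.Value?.ToString();
        districts = [];
        countryData = await Http.GetFromJsonAsync<CountryData>(
            $"api/address/getAllStatesByCountry?countryName={selectedCountry}");
    }

    private async Task OnStateChangedAsync(ChangeEventArgs e) =>
        districts = await Http.GetFromJsonAsync<List<string>>(
            $"api/address/getDistrictsByStateAndCountry?countryName={selectedCountry}&stateName={e.Value}") ?? [];
}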
When is This AI-Powered Approach Most Valuable?
This technique shines when:
- Data is Hierarchical & Complex: Addresses, product categories, organizational charts, biological taxonomies – anywhere with nested, variable structures.
- Data Changes Infrequently but Unpredictably: Administrative boundaries shift, new regions are formed. AI models are often trained on more recent data than static databases can easily keep up with.
- Comprehensive Global Data is Hard to Source: Finding accurate, complete, and consistently formatted datasets for all countries and their subdivisions is a significant challenge. AI can bridge this gap.
- Validation Rules Vary Widely: Phone number and postal code formats are prime examples where rules differ significantly across borders. Maintaining these manually is tedious and error-prone.
- Near-Perfect Accuracy is Sufficient: While LLMs are remarkably good at generating this type of data, they aren’t infallible, especially for very obscure or newly established regions. If absolute, mission-critical accuracy is required for every single entry, additional validation might be needed. For most standard address forms, the accuracy is more than adequate.
Important Considerations and Optimizations
While powerful, this approach requires careful thought:
- Cost Management: AI API calls (especially to cloud providers like Azure OpenAI) have associated costs based on usage (tokens processed).
- Mitigation: Implement aggressive caching (like Redis in our example) for frequently accessed, stable data (countries, states). Use local models (Ollama) for development and testing. Monitor API usage.
- Latency: AI responses inherently take longer (hundreds of milliseconds to seconds) than direct database lookups (milliseconds).
- Mitigation: Use asynchronous loading patterns in the UI (show loading spinners). Leverage caching heavily. Consider pre-fetching data if user behavior is predictable. Use streaming responses (GenerateCompletionStreamAsync) for potentially long lists to show partial results faster.
- Rate Limiting & Throttling: Both local (Ollama) and cloud AI services may have rate limits. High traffic could lead to throttled requests.
- Mitigation: Caching is the primary defense. Implement retry logic with exponential backoff in your AI client calls (see the sketch after this list). Distribute load if necessary.
- Error Handling & Fallbacks: What happens if the AI service is down, returns an error, or provides malformed JSON?
- Mitigation: Implement robust try-catch blocks around AI calls and JSON parsing. Log errors effectively. Consider having a basic, default list (e.g., major countries) embedded in the application as a fallback. Return appropriate HTTP error codes from your API. The controller example includes basic error handling.
- Prompt Reliability & Model Updates: The effectiveness relies heavily on well-crafted prompts. AI models also get updated, which can subtly change response formats over time (usually for the better).
- Mitigation: Test prompts thoroughly. Be very specific about the desired JSON structure. Have monitoring or tests that validate the format of AI responses periodically. Use versioned prompts if necessary.
- Data Validation (Optional but Recommended): For critical applications, you might want to cross-reference AI-generated data (especially regex patterns or less common regions) against a known, trusted source or perform sanity checks.
- Mitigation: Add secondary validation steps if the cost/complexity is justified by the application’s requirements.
- Security: Ensure API keys and sensitive configuration are handled securely using appropriate mechanisms (Key Vault, User Secrets, environment variables), not hardcoded or checked into source control.
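As noted under rate limiting above, a retry with exponential backoff around the AI call is a cheap safety net. A minimal hand-rolled sketch (a library such as Polly is the sturdier choice in production):
// Sketch: simple exponential backoff around an IAIClient call.
public static class AiRetryHelper
{
    public static async Task<string> GenerateWithRetryAsync(
        IAIClient aiClient, string prompt, int maxAttempts = 3)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return await aiClient.GenerateCompletionAsync(prompt);
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                // Back off 1s, 2s, 4s, ... before the next attempt
                await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));
            }
        }
    }
}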
Adapting to Other Languages & Frameworks
While this example uses .NET 9 Aspire, C#, and Blazor, the core concepts are universally applicable:
- JavaScript/TypeScript (Node.js/Frontend):
  - Use official libraries like openai (for OpenAI/Azure OpenAI) or make direct HTTP requests to Ollama’s REST API using fetch or axios.
  - For caching, use server-side solutions like Redis (ioredis client) or Memcached, or client-side options like localStorage/sessionStorage for simpler cases (though less robust).
  - Manage API interactions within your backend framework (Express, NestJS, etc.) or directly in frontend frameworks (React, Vue, Angular) if building a client-heavy application (though backend orchestration is often preferred for managing keys and caching).
- Python (Flask/Django):
  - Use the openai Python library or libraries like requests for Ollama.
  - Integrate with Redis using clients like redis-py.
  - Structure the API endpoints within your chosen web framework.
The prompt engineering principles – demanding specific JSON structures, requesting sorting, excluding extraneous text, providing examples – remain identical regardless of the programming language. The abstraction pattern (the IAIClient interface) is also valuable in any language supporting interfaces or similar concepts for easy provider switching.
Conclusion: Smarter Data Handling with AI
This intelligent address entry system demonstrates a practical, valuable application of AI in everyday software development. By leveraging LLMs for dynamic data generation, we can overcome the significant challenges of maintaining complex, global datasets manually.
Key Takeaways:
- AI for Data Generation: Use LLMs to generate structured, hierarchical, or variable data on demand when static databases are impractical.
- Prompt Engineering is Crucial: Craft clear, specific prompts demanding precise JSON output for reliable parsing.
- Caching is Non-Negotiable: Implement smart caching strategies (like Redis) to optimize performance, reduce latency, and manage costs.
- Abstract AI Providers: Use interfaces or similar patterns to easily switch between local and cloud AI models.
- Consider the Tradeoffs: Balance the benefits of dynamic generation against factors like latency, cost, and the need for potential fallbacks or validation.
If you have any more questions or need further clarification, feel free to ask!