voyage-3, voyage-3-large, and voyage-3-lite handle diverse text inputs. For specialized applications, Voyage provides models tailored to domains like code (voyage-code-3), legal (voyage-law-2), and finance (voyage-finance-2), offering higher accuracy by capturing the context and semantics unique to each field. They also offer a multimodal model (voyage-multimodal-3) capable of processing interleaved text and images. In addition, Voyage provides reranking models in standard and lite versions, each focused on optimizing relevance while keeping latency and computational load under control.
The voyage-3-large model shows up to 20% improved retrieval accuracy over widely adopted production models across 100 datasets spanning domains like law, finance, and code. Despite this performance, it requires up to 200x less storage when using binary quantized embeddings. Domain-specific models like voyage-code-2 also outperform general-purpose models by up to 15% on code tasks.
The reranking models rerank-lite-1 and rerank-1 deliver gains of up to 14% in precision and recall across over 80 multilingual and vertical-specific datasets. These improvements translate directly into better relevance, faster inference, and more efficient RAG pipelines at scale.
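To see where storage savings of that magnitude come from, here is a rough back-of-the-envelope sketch. The dimensions below are illustrative assumptions, not official model specs, so the exact factor depends on which models and output dimensions you compare:

```typescript
// Storage footprint of a single embedding vector, in bytes.
// bitsPerValue is 32 for float32 values and 1 for binary-quantized values.
function embeddingBytes(dims: number, bitsPerValue: number): number {
  return (dims * bitsPerValue) / 8;
}

// Illustrative comparison: a 2048-dim float32 embedding vs. a
// 256-dim binary-quantized embedding (assumed sizes, not model specs).
const fullPrecision = embeddingBytes(2048, 32); // 8192 bytes
const binaryQuantized = embeddingBytes(256, 1); // 32 bytes
const reduction = fullPrecision / binaryQuantized;

console.log(`${fullPrecision} B vs ${binaryQuantized} B: ${reduction}x smaller`);
```

With these assumed numbers the reduction works out to 256x; it is the combination of lower dimensionality and 1-bit values that makes factors in the 200x range plausible.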
Because of these trade-offs, modern web development stacks now include multiple render modes to minimize the negative effects of each mode.
@rendermode
directive, which child components inherit through the component hierarchy.
@page "/static-example"
@* Static server rendering is the default; no @rendermode directive is needed *@
<h1>Static Server Rendered Component</h1>
<p>This content is rendered on the server as static HTML.</p>
@page "/interactive-server"
@rendermode InteractiveServer
<button @onclick="UpdateMessage">Click me</button> @message
@code {
private string message = "Not updated yet.";
private void UpdateMessage()
{
// This executes on the server
message = "Updated on the server!";
}
}
@page "/interactive-wasm"
@rendermode InteractiveWebAssembly
<button @onclick="UpdateMessage">Click me</button> @message
@code {
private string message = "Not updated yet.";
private void UpdateMessage()
{
// This executes in the browser
message = "Updated on the client!";
}
}
@page "/auto-render"
@rendermode InteractiveAuto
<button @onclick="UpdateMessage">Click me</button> @message
@code {
private string message = "Not updated yet.";
private void UpdateMessage()
{
// Executes on server initially,
// then client on subsequent visits
message = "Updated with Auto mode!";
}
}
@inject PersistentComponentState ApplicationState
@code {
private string? data;
protected override void OnInitialized()
{
if (ApplicationState.TryTake("MyDataKey", out string? persistedData))
{
data = persistedData; // Restore persisted state
}
else
{
data = "Hello, world!"; // Initialize state
ApplicationState.Persist("MyDataKey", data); // Persist state
}
}
}
In this example, when the component is initialized, it retrieves the persisted state, if available, using TryTake. If no data is available, new data is initialized and saved for future use by the Persist method. Following this logic, when the component is statically rendered for the first time, it will initialize data and store it in the persisted state. When the component becomes interactive, it will call OnInitialized again, thus rehydrating the state. This approach helps preserve the component’s state across pre-rendering and interactive modes.
// Standard Angular component with client-side rendering
@Component({
selector: 'app-client-example',
template: `
<h1>Client-Side Rendered Component</h1>
<button (click)="updateMessage()">Click me</button>
<p>{{ message }}</p>
`
})
export class ClientExampleComponent {
message = 'Not updated yet';
updateMessage() {
this.message = 'Updated on the client!';
}
}
The client-side rendering example demonstrates a typical Angular component that renders entirely in the browser. When a user interacts with the button, the updateMessage()
method updates the component’s state client-side, changing the displayed message without any server interaction.
# For new projects
ng new --ssr
# For existing projects
ng add @angular/ssr
Once SSR is enabled, Angular components work the same way, but they render first on the server. The same component code works in both client and server environments:
// This component will first render on the server, then hydrate on the client
@Component({
selector: 'app-ssr-example',
template: `
<h1>Server-Side Rendered Content</h1>
<p>This renders on the server first, then becomes interactive</p>
<button (click)="updateMessage()">Click me</button>
<p>{{ message }}</p>
`
})
export class SsrExampleComponent {
message = 'Not updated yet';
updateMessage() {
this.message = 'Updated after hydration!';
}
}
The above server-side rendering example shows how the same component structure works with SSR enabled. The key difference is that, with SSR, the initial HTML is generated on the server and the page becomes interactive only after hydration on the client.
// In app.config.ts
import { ApplicationConfig } from '@angular/core';
import { provideClientHydration } from '@angular/platform-browser';
export const appConfig: ApplicationConfig = {
providers: [
provideClientHydration()
]
};
@defer
block and a variety of hydration triggers.
import {
bootstrapApplication,
provideClientHydration,
withIncrementalHydration,
} from '@angular/platform-browser';
...
bootstrapApplication(AppComponent, {
providers: [provideClientHydration(withIncrementalHydration())]
});
This allows developers to optimize application performance. The available triggers include hydrate on idle (during browser idle time), hydrate on viewport (when content becomes visible), hydrate on interaction and hydrate on hover (responding to user engagement), hydrate on immediate (right after initial content renders), hydrate on timer (after a specified delay), hydrate when (based on custom conditions) and hydrate never (permanently static). By strategically applying these triggers, developers can prioritize critical UI elements while deferring less important components, resulting in faster initial load times and improved user experience.
@defer (hydrate on viewport) {
<large-cmp />
} @placeholder {
<div>Large component placeholder</div>
}
In Angular’s incremental hydration, nested deferred blocks follow a hierarchical hydration pattern: parent components must be hydrated before their children, creating a sequential code loading process. To maximize performance, developers should strategically position hydration boundaries around computationally expensive components, implement Angular Signals for efficient cross-boundary state management, design effective loading states with @placeholder content, consider zoneless change detection to reduce overhead, and fully utilize Angular Universal’s server-side rendering capabilities for optimal initial content delivery.
'use client';
import { useState } from 'react';
export default function ClientComponent() {
const [message, setMessage] = useState('Not updated yet');
return (
<div>
<h1>Client Component</h1>
<button onClick={() => setMessage('Updated on client!')}>
Click me
</button>
<p>{message}</p>
</div>
);
}
The above code shows a React client component that runs entirely in the browser. The 'use client'
directive at the top explicitly marks it as client-side code, a convention introduced with React Server Components to distinguish between server and client rendering contexts. The component maintains state with useState and updates the message when the button is clicked, all on the client.
// No 'use client' directive means this is a Server Component
import { getServerData } from '../lib/data';
import ClientComponent from './ClientComponent';
export default async function ServerComponent() {
// This runs on the server only
const data = await getServerData();
return (
<div>
<h1>Server Component</h1>
<p>Data from server: {data}</p>
{/* Server Components can render Client Components */}
<ClientComponent initialData={data} />
</div>
);
}
The code above demonstrates a React Server Component. Without the 'use client'
directive, this component runs exclusively on the server. It can directly access server resources and perform async operations like data fetching during the rendering process. The server renders the component with data already included and sends the resulting HTML to the client. As shown above, Server Components can seamlessly render Client Components, creating a hybrid rendering model where server-rendered content can include interactive client-side elements.
On the client, React attaches interactivity to the server-rendered HTML with hydrateRoot:
import { hydrateRoot } from 'react-dom/client';
import App from './App';
// Assumes the HTML was server-rendered and contains the App structure
hydrateRoot(document.getElementById('root'), <App />);
// Next.js page with static rendering
export function getStaticProps() {
return {
props: { data: 'This was rendered at build time' }
};
}
export default function Page({ data }) {
return <div>{data}</div>;
}
Astro: Focuses on content-driven websites with a unique “islands architecture” approach to hydration.
// Astro component with React island
---
// Server-only code (runs at build time)
const title = "Welcome to Astro";
---
<html>
<body>
<h1>{title}</h1>
<!-- React component that hydrates on the client -->
<React.InteractiveComponent client:load />
</body>
</html>
While React has several frameworks for server rendering, Progress KendoReact provides a robust UI component library that works seamlessly with these server rendering solutions:
// server.ts (created by Angular Universal)
import 'zone.js/node';
import { ngExpressEngine } from '@nguniversal/express-engine';
import * as express from 'express';
import { AppServerModule } from './src/main.server';
const app = express();
app.engine('html', ngExpressEngine({
bootstrap: AppServerModule,
}));
app.get('*', (req, res) => {
res.render('index', { req });
});
app.listen(4000);
Angular’s server rendering capabilities through Angular Universal are complemented by Kendo UI for Angular:
Framework | Implementation | Pros | Cons |
---|---|---|---|
Blazor | Interactive WebAssembly | Works offline, reduces server load | Larger initial download, slower startup |
Angular | Traditional SPA | Rich interactivity, simpler development | Poorer SEO, slower initial render |
React | Client Components | Full interactivity, familiar model | Larger JS bundles, SEO challenges |
Framework | Implementation | Pros | Cons |
---|---|---|---|
Blazor | InteractiveAuto | Best of both worlds, optimized for returning visitors | More complex, requires both server and client setup |
Angular | SSR with Hydration | Good SEO with full interactivity | Potential hydration mismatches |
React | Next.js with mixed components | Flexible, optimized per-page rendering | More complex mental model |
Modern rendering approaches are as powerful as they are convenient, making them a great choice for new applications: they enable developers to build fast, interactive and SEO-friendly web experiences without compromising on functionality.
import { AIPrompt } from "@progress/kendo-react-conversational-ui";
Before introducing the AIPrompt component, let’s reconstruct our base Chat component so we have a functional chat UI as our foundation.
import React, { useState } from "react";
import { Chat } from "@progress/kendo-react-conversational-ui";
const user = {
id: 1,
avatarUrl:
"https://demos.telerik.com/kendo-react-ui/assets/dropdowns/contacts/RICSU.jpg",
avatarAltText: "User Avatar",
};
const bot = { id: 0 };
const initialMessages = [
{
author: bot,
text: "Hello! I'm your AI assistant. How can I assist you today?",
timestamp: new Date(),
},
];
const App = () => {
const [messages, setMessages] = useState(initialMessages);
const handleSendMessage = (event) => {
setMessages((prev) => [...prev, event.message]);
const botResponse = {
author: bot,
text: "Processing your request...",
timestamp: new Date(),
};
setTimeout(() => {
setMessages((prev) => [...prev, botResponse]);
}, 1000);
};
return (
<Chat
user={user}
messages={messages}
onMessageSend={handleSendMessage}
placeholder="Type your message..."
width={400}
/>
);
};
export default App;
In the above code example, the Chat component provides the basic structure for user-bot interaction. It allows users to send messages and receive placeholder responses from the bot, simulating a functional chat interface.
import {
AIPrompt,
AIPromptView,
AIPromptOutputView,
AIPromptCommandsView,
} from "@progress/kendo-react-conversational-ui";
Each of the components serves a specific purpose: AIPromptView renders the prompt input, AIPromptOutputView displays the generated responses and AIPromptCommandsView lists predefined commands. To manage the component’s behavior, we’ll first define some state:
const [activeView, setActiveView] = useState("prompt");
const [outputs, setOutputs] = useState([]);
const [loading, setLoading] = useState(false);
We’ll also create a function to switch between the prompt input and output view when a request is made:
const handleActiveViewChange = (view) => {
setActiveView(view);
};
The above function will allow AIPrompt to switch views dynamically. Next, we’ll create the handleOnRequest function responsible for sending the prompt to the AI service:
const handleOnRequest = async (prompt) => {
if (!prompt || loading) return; // Prevent empty or duplicate requests
setLoading(true);
// Placeholder for AI response while waiting
setOutputs([
{
id: outputs.length + 1,
title: prompt,
responseContent: "Thinking...",
},
...outputs,
]);
try {
const API_KEY = "YOUR_OPENAI_API_KEY"; // Replace with a valid API key
const API_URL = "https://api.openai.com/v1/chat/completions";
const response = await fetch(API_URL, {
method: "POST",
headers: {
Authorization: `Bearer ${API_KEY}`,
"Content-Type": "application/json",
},
body: JSON.stringify({
model: "gpt-4",
messages: [{ role: "user", content: prompt }],
}),
});
if (!response.ok) {
throw new Error(`HTTP error! Status: ${response.status}`);
}
const data = await response.json();
const aiResponse =
data.choices[0]?.message?.content || "Unable to process request.";
// Replace "Thinking..." with actual AI response
setOutputs((prevOutputs) =>
prevOutputs.map((output, index) =>
index === 0 ? { ...output, responseContent: aiResponse } : output
)
);
} catch (error) {
// Handle API errors
setOutputs([
{
id: outputs.length + 1,
title: prompt,
responseContent: "Error processing request.",
},
...outputs,
]);
} finally {
setLoading(false);
setActiveView("output"); // Switch to output view after processing
}
};
In the handleOnRequest function, we’re utilizing OpenAI’s /v1/chat/completions endpoint to generate an AI-powered response. This endpoint enables us to send user messages to the model and receive a contextual reply. It takes in a conversation history structured as an array of messages, each marked by a role (user or assistant).
<AIPrompt
style={{ width: "400px", height: "400px" }}
activeView={activeView}
onActiveViewChange={handleActiveViewChange}
onPromptRequest={handleOnRequest}
disabled={loading}
>
{/* Prompt Input UI */}
<AIPromptView
promptSuggestions={["Out of office", "Write a LinkedIn post"]}
/>
{/* AI Response Output UI */}
<AIPromptOutputView outputs={outputs} showOutputRating={true} />
{/* Commands View */}
<AIPromptCommandsView
commands={[
{ id: "1", text: "Simplify", disabled: loading },
{ id: "2", text: "Expand", disabled: loading },
]}
/>
</AIPrompt>
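Because the completions endpoint is stateless, each request must carry the conversation so far. The standalone TypeScript sketch below (the ChatMessage type and appendTurn helper are illustrative, not part of any SDK) models how that messages array grows as user and assistant turns alternate:

```typescript
// Minimal model of the `messages` array sent to a chat-completions endpoint.
// Role names follow the endpoint's convention; the helper is illustrative.
type Role = "user" | "assistant";

interface ChatMessage {
  role: Role;
  content: string;
}

// Returns a new array instead of mutating, which plays well with React state.
function appendTurn(history: ChatMessage[], role: Role, content: string): ChatMessage[] {
  return [...history, { role, content }];
}

let history: ChatMessage[] = [];
history = appendTurn(history, "user", "Write an out-of-office reply.");
history = appendTurn(history, "assistant", "Here is a short draft...");
history = appendTurn(history, "user", "Make it more formal.");

// `history` would be sent as the messages field of the next request body.
console.log(history.length); // 3
```

In the article's example only the latest prompt is sent, which is why each reply is context-free; sending the accumulated history is what gives the model conversational context.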
This will make our complete code example look like the following:
import React, { useState } from "react";
import {
AIPrompt,
AIPromptView,
AIPromptOutputView,
AIPromptCommandsView,
} from "@progress/kendo-react-conversational-ui";
import { Chat } from "@progress/kendo-react-conversational-ui";
const user = {
id: 1,
avatarUrl:
"https://demos.telerik.com/kendo-react-ui/assets/dropdowns/contacts/RICSU.jpg",
avatarAltText: "User Avatar",
};
const bot = { id: 0 };
const App = () => {
const [activeView, setActiveView] = useState("prompt");
const [outputs, setOutputs] = useState([]);
const [loading, setLoading] = useState(false);
const handleActiveViewChange = (view) => {
setActiveView(view);
};
const handleOnRequest = async (prompt) => {
if (!prompt || loading) return;
setLoading(true);
const API_KEY = "YOUR_OPENAI_API_KEY"; // Replace with a valid API key
const API_URL = "https://api.openai.com/v1/chat/completions";
try {
setOutputs([
{
id: outputs.length + 1,
title: prompt,
responseContent: "Thinking...",
},
...outputs,
]);
const response = await fetch(API_URL, {
method: "POST",
headers: {
Authorization: `Bearer ${API_KEY}`,
"Content-Type": "application/json",
},
body: JSON.stringify({
model: "gpt-4",
messages: [{ role: "user", content: prompt }],
}),
});
if (!response.ok) {
throw new Error(`HTTP error! Status: ${response.status}`);
}
const data = await response.json();
const aiResponse =
data.choices[0]?.message?.content || "Unable to process request.";
setOutputs((prevOutputs) =>
prevOutputs.map((output, index) =>
index === 0 ? { ...output, responseContent: aiResponse } : output
)
);
} catch (error) {
setOutputs([
{
id: outputs.length + 1,
title: prompt,
responseContent: "Error processing request.",
},
...outputs,
]);
} finally {
setLoading(false);
setActiveView("output");
}
};
return (
<div
style={{ display: "flex", flexDirection: "column", alignItems: "center" }}
>
<Chat
user={user}
messages={outputs.map((output) => ({
author: bot,
text: output.responseContent,
}))}
width={400}
/>
<AIPrompt
style={{ width: "400px", height: "400px" }}
activeView={activeView}
onActiveViewChange={handleActiveViewChange}
onPromptRequest={handleOnRequest}
disabled={loading}
>
<AIPromptView
promptSuggestions={["Out of office", "Write a LinkedIn post"]}
/>
<AIPromptOutputView outputs={outputs} showOutputRating={true} />
<AIPromptCommandsView
commands={[
{ id: "1", text: "Simplify", disabled: loading },
{ id: "2", text: "Expand", disabled: loading },
]}
/>
</AIPrompt>
</div>
);
};
export default App;
You can also see the complete code example in the following StackBlitz playground link.
*.css
files, you need to add a link element in the head section of the Blazor application (usually the App.razor
file) referencing the .css
file:
<link href="@Assets["_content/ComponentLibrary/additionalStyles.css"]" rel="stylesheet">
Use the correct project name and filename to reference the standalone *.css
file.
<link href="_content/ComponentLibrary/additionalStyles.css" rel="stylesheet">
To reuse routable (page) components, you need to provide a reference to the shared project assembly in the AdditionalAssemblies
parameter of the Router
definition inside the Routes.razor
file.
AdditionalAssemblies="new[] { typeof(ComponentLibrary.Component1).Assembly }"
Images and JavaScript files stored in the public wwwroot
folder of the shared application can be referenced using the same naming pattern as used for CSS files.
<img alt="Profile Picture" src="_content/ComponentLibrary/profile.png" />
This code references a profile.png
file from the wwwroot
folder of the ComponentLibrary
shared Razor class library project.
.csproj
file of the shared Razor class library.
Use the dotnet pack command in your CI/CD setup to generate the NuGet package. You can then publish the generated package to your NuGet feed.
// Option 1: Using the rendermode attribute
<HeadOutlet @rendermode="InteractiveServer" />
// Option 2: Using the @rendermode directive
@rendermode InteractiveServer
This way, you (re)use the same component in Blazor Server and Blazor WebAssembly projects.
Abstract: As AI models become more embedded in critical sectors like finance, healthcare, and the military, their inscrutable behavior poses ever-greater risks to society. To mitigate this risk, we propose Guillotine, a hypervisor architecture for sandboxing powerful AI models—models that, by accident or malice, can generate existential threats to humanity. Although Guillotine borrows some well-known virtualization techniques, Guillotine must also introduce fundamentally new isolation mechanisms to handle the unique threat model posed by existential-risk AIs. For example, a rogue AI may try to introspect upon hypervisor software or the underlying hardware substrate to enable later subversion of that control plane; thus, a Guillotine hypervisor requires careful co-design of the hypervisor software and the CPUs, RAM, NIC, and storage devices that support the hypervisor software, to thwart side channel leakage and more generally eliminate mechanisms for AI to exploit reflection-based vulnerabilities. Beyond such isolation at the software, network, and microarchitectural layers, a Guillotine hypervisor must also provide physical fail-safes more commonly associated with nuclear power plants, avionic platforms, and other types of mission critical systems. Physical fail-safes, e.g., involving electromechanical disconnection of network cables, or the flooding of a datacenter which holds a rogue AI, provide defense in depth if software, network, and microarchitectural isolation is compromised and a rogue AI must be temporarily shut down or permanently destroyed.