Laravel: OpenAI response streaming with server-sent events
Incorporating OpenAI into your Laravel project benefits greatly from streaming responses. Because GPT takes time to generate text, instead of presenting a blank screen while the model works, we can improve the user experience by displaying the text as it is generated, word by word.
Understanding Server-Sent Events
Server-sent events (SSE) enable a server to push updates to clients over a single, long-lived HTTP connection. The server keeps the connection open and sends data as individual events, which the client handles in real time. SSE offers a straightforward, lightweight solution for real-time communication, particularly well suited to applications that only need live updates flowing from server to client.
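On the wire, SSE is plain text served with the text/event-stream content type: each event is a block of event: and data: lines terminated by a blank line. For example, this is roughly what the browser receives from the endpoint we build below:
event: update
data: Hello

event: update
data: world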
Choosing the Right Approach
Long polling demands considerable bookkeeping on both the client and the server, making it less favorable for this scenario. WebSockets, while effective for bidirectional interactive communication, are excessive for merely receiving generated tokens from GPT. Server-sent events therefore stand out as the optimal choice: the server simply pushes the response to the client as it is produced.
Setting Up a Laravel Project with OpenAI
Begin by creating a new Laravel project:
laravel new laravel-openai-streaming
Next, add the Laravel OpenAI package:
composer require openai-php/laravel --with-all-dependencies
Then publish the configuration file:
php artisan vendor:publish --provider="OpenAI\Laravel\ServiceProvider"
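Publishing creates a config/openai.php file. The relevant detail here is that the API key is read from the environment; an excerpt is shown below, though the exact contents may vary between package versions:
<?php
// config/openai.php (excerpt; may differ by package version)
return [
    'api_key' => env('OPENAI_API_KEY'),
];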
Remember to include the OpenAI API key in your .env file:
OPENAI_API_KEY=sk-...
To display the text from our SSE stream, add the following HTML to welcome.blade.php:
<section>
    <div>
        <p class="...">Laravel Streaming OpenAI</p>
        <p>Streaming OpenAI Responses</p>
        <p id="question"></p>
        <p id="result"></p>
    </div>
    <form id="form-question">
        <input
            required
            type="text"
            name="input"
            placeholder="Type here!"
        />
        <button type="submit">
            Submit
            <span aria-hidden="true"> → </span>
        </button>
    </form>
</section>
Listening to Server-Sent Events with JavaScript
Handle form submission and listen for server-sent events using JavaScript:
const form = document.querySelector("form");
const result = document.getElementById("result");

form.addEventListener("submit", (event) => {
    event.preventDefault();

    const input = event.target.input.value;
    if (input === "") return;

    // Show the question and clear the input field.
    const question = document.getElementById("question");
    question.innerText = input;
    event.target.input.value = "";

    // Open an SSE connection to the /ask endpoint.
    const queryQuestion = encodeURIComponent(input);
    const source = new EventSource("/ask?question=" + queryQuestion);

    // Append each streamed token until the end-of-stream sentinel arrives.
    source.addEventListener("update", function (event) {
        if (event.data === "<END_STREAMING_SSE>") {
            source.close();
            return;
        }
        result.innerText += event.data;
    });
});
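One caveat worth knowing: when the connection drops, EventSource automatically reconnects, which here would re-submit the question to OpenAI. If you want to avoid that, you can close the connection on error. This is an optional addition, placed inside the submit handler after the EventSource is created, not part of the original example:
source.addEventListener("error", () => {
    // EventSource retries by default; closing prevents the same
    // question from being streamed to the server again.
    source.close();
});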
Handling Server-Sent Events in Laravel
Create a new controller named AskController:
php artisan make:controller AskController
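As an aside, artisan can scaffold a single-action controller directly via the --invokable flag, which generates the __invoke method for you:
php artisan make:controller AskController --invokable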
Then register it in routes/web.php:
<?php

use App\Http\Controllers\AskController;
use Illuminate\Support\Facades\Route;

Route::get('/', function () {
    return view('welcome');
});

Route::get("/ask", AskController::class);
Since we don't specify a method name, Laravel will route any matching request to the controller's __invoke method. Now implement the server-side logic in AskController to generate GPT responses and stream them using SSE:
<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use OpenAI\Laravel\Facades\OpenAI;

class AskController extends Controller
{
    public function __invoke(Request $request)
    {
        $question = $request->query('question');

        return response()->stream(function () use ($question) {
            // Ask OpenAI for a streamed completion; tokens arrive one by one.
            $stream = OpenAI::completions()->createStreamed([
                'model' => 'text-davinci-003',
                'prompt' => $question,
                'max_tokens' => 1024,
            ]);

            foreach ($stream as $response) {
                $text = $response->choices[0]->text;

                // Stop streaming if the client disconnected.
                if (connection_aborted()) {
                    break;
                }

                // Emit one SSE event per token: "event:" names it,
                // "data:" carries the payload, a blank line terminates it.
                echo "event: update\n";
                echo 'data: ' . $text;
                echo "\n\n";
                ob_flush();
                flush();
            }

            // Send a sentinel so the client knows to close the connection.
            echo "event: update\n";
            echo 'data: <END_STREAMING_SSE>';
            echo "\n\n";
            ob_flush();
            flush();
        }, 200, [
            'Cache-Control' => 'no-cache',          // never cache the stream
            'X-Accel-Buffering' => 'no',            // tell Nginx not to buffer
            'Content-Type' => 'text/event-stream',
        ]);
    }
}
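With the route and controller in place, you can sanity-check the stream from a terminal before wiring up the front end. Assuming the app is running via php artisan serve on localhost:8000, curl's -N flag disables output buffering so events print as they arrive:
curl -N "http://localhost:8000/ask?question=Hello"
Note that OpenAI has since retired text-davinci-003 and the other legacy completions models. If the call above fails for that reason, the same package exposes a streamed chat endpoint. Here is a minimal sketch of the equivalent call, assuming a current chat model such as gpt-3.5-turbo; the token text arrives on delta->content instead of text:
$stream = OpenAI::chat()->createStreamed([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => $question],
    ],
    'max_tokens' => 1024,
]);

foreach ($stream as $response) {
    // Chat streams deliver partial messages in "delta"; the first and
    // last chunks may carry no content, hence the null coalescing.
    $text = $response->choices[0]->delta->content ?? '';
    // ... emit the SSE "update" event exactly as in the controller above ...
}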
Nginx Configuration
When deploying with Nginx, ensure the configuration unsets the Connection header and sets proxy_http_version to 1.1:
location ^~ /ask$ {
    proxy_http_version 1.1;
    add_header Connection '';
    fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    include fastcgi_params;
}
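The X-Accel-Buffering: no header sent by the controller already tells Nginx not to buffer this response. If your setup strips custom headers, or you prefer to enforce it in the server configuration, you can also disable FastCGI buffering for the location explicitly; this is an alternative, not a required addition:
location ^~ /ask$ {
    fastcgi_buffering off;
    # ... fastcgi_pass and the other directives as above ...
}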
Conclusion
Integrating server-sent events into Laravel greatly enhances the user experience when working with OpenAI models. By leveraging SSE, text generated by a GPT model can be streamed to users in real time, so they start seeing content immediately instead of waiting for the complete response, which makes the application feel faster and more interactive.
I hope you enjoyed this article and learned something new.