Ensure broad compatibility across multiple frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above. Minimize dependencies to avoid version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the core capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();

Console.WriteLine(transcript.Text);
```

For local files, similar code can be used to achieve transcription:

```csharp
await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();

Console.WriteLine(transcript.Text);
```

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially useful for applications that require immediate processing of audio data.

```csharp
using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}")
);

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);

await transcriber.ConnectAsync();

// Pseudocode for getting audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();
```

Using LeMUR for LLM Apps

The SDK integrates with LeMUR to enable developers to build large language model (LLM) applications on voice data. Here is an example:

```csharp
var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);
```

Audio Intelligence Models

Additionally, the SDK includes built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

```csharp
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}
```

For more information, visit the official AssemblyAI blog.

Image source: Shutterstock
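As a closing note on the compatibility requirements mentioned at the top: a project consuming the SDK across those frameworks might declare its targets in its project file along these lines. This is a minimal sketch, not taken from the SDK itself — the target framework monikers are inferred from the versions listed above, and the package version is a placeholder.

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Targets matching the frameworks listed above:
         .NET 6.0, .NET Framework 4.6.2, .NET Standard 2.0 -->
    <TargetFrameworks>net6.0;net462;netstandard2.0</TargetFrameworks>
  </PropertyGroup>
  <ItemGroup>
    <!-- Version is a placeholder; use the latest release -->
    <PackageReference Include="AssemblyAI" Version="*" />
  </ItemGroup>
</Project>
```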