
PowerShell From Microsoft Website
Category: Windows

Very detailed tutorials about PowerShell scripting for work automation. ...


Views: 192 Likes: 87
Excel Programming
Category: Technology

If you program in Excel using VBA, be aware that VB in Excel is no longer supported by Microsoft. They ...


Views: 358 Likes: 92
[Solved] How to Resolve a Suspect Database in Microso ...
Category: SQL

Question: How do you remove the status of "Emergency" from the ...


Views: 168 Likes: 68
Microsoft Channel 9 for Developers
Category: Technology

Microsoft Channel 9 is the best Developer Channel on the Net. L ...


Views: 246 Likes: 78
Microsoft Office Training Page
Category: Technology

This data was edited just for testing. ...


Views: 386 Likes: 85
[Solved]: Invalid version: 16. (Microsoft.SqlServe ...
Category: Other

Question: How do you solve the error below? Invalid versi ...


Views: 0 Likes: 24
Job Opening - .NET Developer | Remote
Category: Jobs

Hello, I hope this message finds you well – take a look at the job description I ...


Views: 0 Likes: 77
HTTP Error 502.5 - Process Failure: The current .N ...
Category: Network

Problem: The current .NET SDK does not support targeting .NET Core 3.0. Either target .NET Core 2 ...


Views: 845 Likes: 108
Cannot consume scoped service Microsoft.AspNetCore ...
Category: .Net 7

Question: How do you inject RoleManager into the ASP.NET 6 Dependency Injection container, when I do am ...


Views: 0 Likes: 51
Introducing Bash for Beginners

A new Microsoft video series for developers learning how to script.

According to Stack Overflow 2022 Developer Survey, Bash is one of the top 10 most popular technologies. This shouldn't come as a surprise, given the popularity of using Linux systems with the Bash shell readily installed, across many tech stacks and the cloud. On Azure, more than 50 percent of virtual machine (VM) cores run on Linux. There is no better time to learn Bash!

Long gone are the days of feeling intimidated by a black screen with text known as a terminal. Say goodbye to blindly typing in "chmod 777" while following a tutorial. Say hello to task automation, scripting fundamentals, programming basics, and your first steps to working with a cloud environment via the Bash command line.

What we'll be covering

My cohost, Josh, and I will walk you through everything you need to get started with Bash in this 20-part series. We will provide an overview of the basics of Bash scripting, starting with how to get help from within the terminal. The terminal is a window that lets you interact with your computer's operating system, and in this case, the Bash shell. To get help with a specific command, you can use the man command followed by the name of the command you need help with. For example, man ls will provide information on the ls command, which is used for listing directories and finding files.

Once you've gotten help from within the terminal, you can start navigating the file system. You'll learn how to list directories and find files, as well as how to work with directories and files themselves. This includes creating, copying, moving, and deleting directories and files. You'll also learn how to view the contents of a file using the cat command.

Another important aspect of Bash is environment variables. These are values that are set by the operating system and are used by different programs and scripts. In Bash, you can access these variables using the "$" symbol followed by the name of the variable. For example, $PATH will give you the value of the PATH environment variable, which specifies the directories where the shell should search for commands.

Redirection and pipelines are two other important concepts in Bash. Redirection allows you to control the input and output of a command, while pipelines allow you to chain multiple commands together. For example, you can use the ">" symbol to redirect the output of a command to a file, and the "|" symbol to pipe the output of one command to the input of another.

When working with files in Linux, you'll also need to understand file permissions. In Linux, files have permissions that determine who can access them and what they can do with them. You'll learn about the different types of permissions, such as read, write, and execute, and how to change them using the chmod command.

Next, we'll cover some of the basics of Bash scripting. You'll learn how to create a script, use variables, and work with conditional statements, such as "if" and "if else". You'll also learn how to use a case statement, which is a way to control the flow of execution based on the value of a variable. Functions are another important aspect of Bash scripting, and you'll learn how to create and use them to simplify your scripts. Finally, you'll learn about loops, which allow you to repeat a set of commands multiple times.

Why Bash matters

Bash is a versatile and powerful language that is widely used.
Whether you’re looking to automate tasks, manage files, or work with cloud environments, Bash is a great place to start. With the knowledge you’ll gain from this series, you’ll be well on your way to becoming a proficient Bash scripter.Many other tools like programming languages and command-line interfaces (CLIs) integrate with Bash, so not only is this the beginning of a new skill set, but also a good primer for many others. Want to move on and learn how to become efficient with the Azure CLI? Bash integrates with the Azure CLI seamlessly. Want to learn a language like Python? Learning Bash teaches you the basic programming concepts you need to know such as flow control, conditional logic, and loops with Bash, which makes it easier to pick up Python. Want to have a Linux development environment on your Windows device? Windows Subsystem for Linux (WSL) has you covered and Bash works there, too!While we won't cover absolutely everything there is to Bash, we do make sure to leave you with a solid foundation. At the end of this course, you'll be able to continue on your own following tutorials, docs, books, and other resources. If live is more your style, catch one of our How Linux Works and How to leverage it in the Cloud Series webinars. We'll cover a primer on How Linux Works, discuss How and when to use Linux on Azure, and get your developer environment set up with WSL.This Bash for Beginners series is part of a growing library of video series on the Microsoft Developer channel looking to quickly learn new skills including Python, Java, C#, Rust, JavaScript and more.Learn more about Bash in our Open Source communityNeed help with your learning journey?Watch Bash for Beginners Find Josh and myself on Twitter. Share your questions and progress on our Tech Community, we'll make sure to answer and cheer you on. The post Introducing Bash for Beginners appeared first on Microsoft Open Source Blog.


Get Rid Of Black Blinking Cursor MSSMS
Category: Databases

If you have a big blinking black cursor showing in Microsoft SQL Server Management Studio, just press "Insert" ...


Views: 379 Likes: 112
Visual Studio Short Cuts
Category: Technology

1. Copy a JSON payload and paste it into a Visual Studio 2019 C# class, and it will convert the JSON p ...


Views: 289 Likes: 91
Access Database [File is open in another program]
Category: Databases

When working with Access Database, sometimes the file (Microsoft Access) can get orphaned by a proce ...


Views: 256 Likes: 88
Improve BERT inference speed by combining the power of Optimum, OpenVINO™, ONNX Runtime, and Azure

In this blog, we will discuss one of the ways to make huge models like BERT smaller and faster with OpenVINO Neural Networks Compression Framework (NNCF) and ONNX Runtime with OpenVINO Execution Provider through Azure Machine Learning.

Big models are slow, we need to make them faster

Today's best-performing language processing models use huge neural architectures with hundreds of millions of parameters. State-of-the-art transformer-based architectures like BERT are available as pretrained models for anyone to use for any language task.

The big models have outstanding accuracy, but they are difficult to use in practice. These models are resource hungry due to a large number of parameters. These issues become worse when serving the fine-tuned model: it requires a lot of memory and time to process a single message. A state-of-the-art model is not good if it can handle only one message per second. To improve the throughput, we need to accelerate the well-performing BERT model by reducing the computation or the number of operations with the help of quantization.

Overview of Optimum Intel and quantization aware training

Optimum Intel is an extension for the Hugging Face Optimum library with the OpenVINO runtime as a backend for the Transformers architectures. It also provides an interface to the Intel NNCF (Neural Network Compression Framework) package. It helps implement Intel's optimizations through NNCF with changes to just a few lines of code in the training pipeline.

Quantization aware training (QAT) is a widely used technique for optimizing models during training. It inserts nodes into the neural network during training that simulate the effect of lower precision. This allows the training algorithm to consider quantization errors as part of the overall training loss that gets minimized during training. QAT has better accuracy and reliability than carrying out quantization after the model has been trained. The output after training with our tool is a quantized PyTorch model, ONNX model, and IR.xml.

Overview of ONNX Runtime and OpenVINO Execution Provider

ONNX Runtime is an open source project that is designed to accelerate machine learning across a wide range of frameworks, operating systems, languages, and hardware platforms. It enables the acceleration of machine learning inferencing across all of your deployment targets using a single set of APIs.

Intel and Microsoft joined hands to create the OpenVINO Execution Provider (OVEP) for ONNX Runtime, which enables ONNX models to run inference using ONNX Runtime APIs while using the OpenVINO Runtime as a backend. With the OpenVINO Execution Provider, ONNX Runtime delivers better inferencing performance on the same hardware compared to generic acceleration on Intel CPU, GPU, and VPU. Now that you've got a basic understanding of quantization, ONNX Runtime, and OVEP, let's take the best of both worlds and stitch the story together.

Putting the tools together to achieve better performance

In our next steps, we will be doing quantization aware training using Optimum-Intel and inference using Optimum-ORT with OpenVINO Execution Provider through Azure Machine Learning.
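Before walking through the individual steps below, here is a compact, hedged Python sketch of what that two-part flow can look like. The OVConfig and QuestionAnsweringOVTrainer calls mirror the snippets shown later in this post; the model/dataset wiring and the file name of the exported INT8 model are illustrative placeholders, not the authors' exact code.

from optimum.intel.openvino import OVConfig
from trainer_qa import QuestionAnsweringOVTrainer  # helper shipped with the referenced example code
import onnxruntime as ort

ov_config = OVConfig()                      # NNCF quantization-aware-training settings
trainer = QuestionAnsweringOVTrainer(       # drop-in replacement for transformers.Trainer
    model=model,                            # assumption: a bert-squad checkpoint loaded with transformers
    ov_config=ov_config,
    args=training_args,                     # assumption: standard TrainingArguments
    train_dataset=train_dataset,            # assumption: tokenized SQuAD splits prepared as in the example repo
    eval_dataset=eval_dataset,
)
trainer.train()                             # emits quantized PyTorch, ONNX, and OpenVINO IR artifacts

# Inference on the exported INT8 ONNX model through the OpenVINO Execution Provider.
session = ort.InferenceSession(
    "bert_squad_int8.onnx",                 # placeholder path to the QAT output
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)
outputs = session.run(None, tokenized_inputs)  # dict of input_ids / attention_mask / token_type_ids arrays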
Optimum can be used to load optimized models from the Hugging Face Hub and create pipelines to run accelerated inference.

Converting a PyTorch FP32 model to an INT8 ONNX model with QAT

When utilizing the Hugging Face training pipelines, all you need is to update a few lines of code and you can invoke the NNCF optimizations for quantizing the model. The output of this would be an optimized INT8 PyTorch model, ONNX model, and OpenVINO IR (see the flow diagram in the original post).

For this case study, we have chosen the bert-squad pretrained model from Hugging Face. This has been pretrained on the SQuAD dataset for the question-answering use case. QAT can be applied by replacing the Transformers Trainer with the Optimum OVTrainer, as shown below.

Run the training pipeline:

1. Import OVConfig:
from optimum.intel.openvino import OVConfig
from trainer_qa import QuestionAnsweringOVTrainer

2. Initialize a config:
ov_config = OVConfig()

3. Initialize our Trainer:
trainer = QuestionAnsweringOVTrainer()

Comparison of the FP32 model and INT8 ONNX model with the Netron model visualization tool

When compared with FP32, the INT8 model has QuantizeLinear and DequantizeLinear operations added to mimic the lower precision after the QAT. (Figure 1: FP32 model. Figure 2: INT8 model.)

To replicate this example, check out the reference code with detailed instructions on QAT and inference using OpenVINO and Azure Machine Learning Jupyter Notebooks on GitHub.

Performance improvement results

Accuracy     Original FP32   QAT INT8
F1           93.1            92.83
Eval_exact   86.91           86.94

F1: In this case, it's computed over the individual words in the prediction against those in the True Answer. The number of shared words between the prediction and the truth is the basis of the F1 score: precision is the ratio of the number of shared words to the total number of words in the prediction, and recall is the ratio of the number of shared words to the total number of words in the ground truth.

Eval_exact: This metric is as simple as it sounds. For each question + answer pair, if the characters of the model's prediction exactly match the characters of (one of) the True Answer(s), EM = 1, otherwise EM = 0. This is a strict all-or-nothing metric; being off by a single character results in a score of 0. When assessing against a negative example, if the model predicts any text at all, it automatically receives a 0 for that example.

Comparison of the ONNXRUNTIME_PERF_TEST application for ONNX-FP32 and ONNX-INT8 models

We've used ONNX Runtime APIs for running inference for the BERT model. As you can see, the performance of the INT8 optimized model improved by almost 2.95x when compared to FP32, without much compromise in the accuracy.

Quantized PyTorch, ONNX, and INT8 models can also be served using OpenVINO Model Server for high scalability and optimization for Intel solutions, so that you can take advantage of all the power of the Intel Xeon processor or Intel's AI accelerators and expose it over a network interface.

Optimize speed and performance

As neural networks move from servers to the edge, optimizing speed and size becomes even more important. In this blog, we gave an overview of how to use open source tooling to make it easy to improve performance.

References
Enhanced Low-Precision Pipeline to Accelerate Inference with OpenVINO toolkit.
Developer Guide: Model Optimization with the OpenVINO Toolkit.
Evaluating QA Metrics, Predictions, and the Null Response.

SW/HW configuration
Framework configuration: ONNX Runtime, Optimum-Intel [NNCF]
Application configuration: ONNX Runtime, EP: OpenVINO ./onnx_perf_test; OPENVINO 2022.2 ./benchmark_app
Input: Question and context
Application Metric: Normalized throughput
Platform: Intel Icelake-8380
Number of Nodes: 2
Number of Sockets: 2
CPU or Accelerator: Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz
Cores/socket, Threads/socket or EU/socket: 40, 2
ucode: 0xd000375
HT: Enabled
Turbo: Enabled
BIOS Version: American Megatrends International, LLC. V1.4
System DDR Mem Config (slots / cap / run-speed): 32 / 32 GB / 3200 MT/s
Total Memory/Node (DDR+DCPMM): 1024 GB
Storage (boot): INTEL_SSDSC2KB019T8 1.8T
NIC: 2 x Ethernet Controller X710 for 10GBASE-T
OS: Ubuntu 20.04.4 LTS
Kernel: 5.15.0-46-generic

The post Improve BERT inference speed by combining the power of Optimum, OpenVINO™, ONNX Runtime, and Azure appeared first on Microsoft Open Source Blog.


Retraining Machine Learning Model in Microsoft Mac ...
Category: Machine Learning

System.InvalidCastException: Message=Unable to cast object of type 'Microsoft.ML.Da ...


Views: 563 Likes: 81
Command-Line Switches For Microsoft Access
Category: Databases

...


Views: 276 Likes: 84
Microsoft SQL Server, Error: 258
Category: SQL

Error: A network-related or instance-specific error occurred while establishing ...


Views: 492 Likes: 102
Microsoft Access Tutorials
Category: Databases

Microsoft Access T ...


Views: 303 Likes: 92
Docker Container Micro-Service Error: Can not Conn ...
Category: Docker

Problem: Cannot connect to SQL Server in a Docker container from Microsoft SQL Server Management ...


Views: 257 Likes: 90
pull access denied for microsoft/mssql-server-lin ...
Category: Docker-Compose

Question: Why is this error happening? "pull access denied for microsoft/mssql-server-linux, rep ...


Views: 0 Likes: 49
We're unable to complete your request invalid_requ ...
Category: SQL

We're unable to ...


Views: 4131 Likes: 150
StreamJsonRpc.RemoteInvocationException: Cannot fi ...
Category: Tools

Question: There is a warning ale ...


Views: 0 Likes: 53
IIS Just-In-Time Debugging Error
Category: Servers

Question: Why is the IIS Just-In-Time debugging error popping up when testing ...


Views: 0 Likes: 34
Keyword or statement option 'bulkadmin' is not sup ...
Category: SQL

Question: I am getting the SQL Server error: Keyword or statement option 'bulkadmin' is not supported ...


Views: 0 Likes: 47
Making culture count for Open Source sustainability—Celebrating FOSS Fund 25

Microsoft cares about open source sustainability, from its membership across multiple initiatives and foundations, to ongoing empowerment efforts to encourage and reward contributions, and beyond. Building a culture where every employee can visualize and embrace their responsibility to upstream projects is at the forefront of the Open Source Programs Office (OSPO) work, which embodies the goals of Microsoft's FOSS Fund.

Building on the work of others

In the spirit of open source, this work builds on the work of our peers, specifically the FOSS Fund model created by Indeed, and with ongoing collaboration with TODO Group members working on similar goals for supporting open source in their companies. At Microsoft, FOSS Fund is an employee-driven effort that builds awareness of open source sustainability through giving. The fund awards $10,000 USD each month to open source projects nominated by employees. Since the program's launch nearly two years ago, 34 projects have been selected, as determined by thousands of employee votes. While we don't track how funds are used, some projects have shared that they used the funds for everything from sponsoring a contributor to creating brand assets, attending events, and covering technology and subscription expenses.

Creating visibility for Open Source projects, maintainers, and their impact

To date, Microsoft's FOSS Fund has been awarded to small projects with big impact, Syn and ajv, as well as larger, foundational projects with established communities like curl, Network Time Protocol (NTP), and webpack. We were proud to see employees nominating and voting for projects with impact for accessibility and inclusion like Chayn, Optikey, and NVDA. Beyond size and impact, nominations spanned a range of ecosystems, including gaming with Godot Engine, and mapping with the much-loved OpenStreetMap project. Employee nominations helped surface and rally support for a vast range of open technology making software better, more secure, faster, easier to document, easier to test, and easier to query, with projects like dbatools, OpenSSL, Babel, rust-analyzer, Reproducible Builds, QEMU, Grain, and mermaid-js.

Celebrating and looking forward

To celebrate FOSS Fund 25, we invited all employees whose projects were not selected in previous FOSS Funds to propose a project for a one-time $500.00 award. This resulted in over 40 more projects and project maintainers receiving this microgrant over the last few days (with 2 still to be issued). Additionally, for 2023, we will strive to grow our impact on, and be more intentional about, funding inclusion. To that end, we will add a new D&I track to the FOSS Fund, with awards directed towards projects having impact on diversity and inclusion, or to efforts within upstream projects (like working groups) working on D&I efforts. The new track will run alternate months. We hope this will continue to build a culture of awareness and responsibility for open source sustainability.

If you or your organization are interested in building your own FOSS Fund, you can check out Indeed's free resource. If you are interested in collaborating on, or have ideas for impacting diversity and inclusion through such a program, please reach out to me, or join the TODO Group Slack channel and say hello!

The post Making culture count for Open Source sustainability—Celebrating FOSS Fund 25 appeared first on Microsoft Open Source Blog.


How to Connect to Azure Linux Virtual Machine Usin ...
Category: Servers

Question: When I try to log into Microsoft Azure Portal Linux Virtual Machine us ...


Views: 0 Likes: 40
How to Insert two corresponding columns into a tem ...
Category: Other

Question: How do you insert two columns corresponding to each other in a temp ta ...


Views: 0 Likes: 9
VBA Microsoft Application Libraries
Category: C-Sharp

Nowadays it is nearly impossible to avoid Microsoft's products. Therefore, it is always helpful to lear ...


Views: 252 Likes: 100
InvalidOperationException: No service for type Mic ...
Category: .Net 7

Question: How do you solve the error that says " ...


Views: 0 Likes: 45
How to optimize sql query in Microsoft SQL Server
Category: SQL

1. Keep in mind that when you write a stored procedure, SQL Server generates a SQL plan. If you ha ...


Views: 463 Likes: 102
Error 0xc0202009: Data Flow Task 1: SSIS Error Co ...
Category: SQL

Question: How do you solve this error? Error 0xc0202009: Data ...


Views: 0 Likes: 54
Unable to start Kestrel. System.InvalidOpera ...
Category: .Net 7

Question: Unable to start Kestrel. System.InvalidOpera ...


Views: 250 Likes: 31
How to choose a Framework when making Web app
Category: Computer Programming

Choosing the right framework for develop ...


Views: 0 Likes: 29
ForecastingCatalog does not contain a definition fo ...
Category: .Net 7

Question: I am working on a Machine Learning TimeSeries Prediction Engine and tried to use the SS ...


Views: 0 Likes: 34
Windows Command Line Docs
Category: Windows

This Microsoft website will take you through https://docs.microsoft.com/en-us/windows-serve ...


Views: 316 Likes: 98
High-performance deep learning in Oracle Cloud with ONNX Runtime

This blog is co-authored by Fuheng Wu, Principal Machine Learning Tech Lead, Oracle Cloud AI Services, Oracle Inc.

Enabling scenarios through the usage of Deep Neural Network (DNN) models is critical to our AI strategy at Oracle, and our Cloud AI Services team has built a solution to serve DNN models for customers in the healthcare sector. In this blog post, we'll share challenges our team faced, and how ONNX Runtime solves these as the backbone of success for high-performance inferencing.

Challenge 1: Models from different training frameworks

To provide the best solutions for specific AI tasks, Oracle Cloud AI supports a variety of machine learning models trained from different frameworks, including PyTorch, TensorFlow, PaddlePaddle, and Scikit-learn. While each of these frameworks has its own built-in serving solutions, maintaining so many different serving frameworks would be a nightmare in practice. Therefore, one of our biggest priorities was to find a versatile unified serving solution to streamline maintenance.

Challenge 2: High performance across a diverse hardware ecosystem

For Oracle Cloud AI services, low latency and high accuracy are crucial for meeting customers' requirements. The DNN model servers are hosted in Oracle Cloud Compute clusters, and most of them are equipped with different CPUs (Intel, AMD, and ARM) and operating systems. We needed a solution that would run well on all the different Oracle compute shapes while remaining easy to maintain.

Solution: ONNX Runtime

In our search for the best DNN inference engine to support our diverse models and perform well across our hardware portfolio, ONNX Runtime caught our eye and stood out from alternatives.

ONNX Runtime is a high-performance, cross-platform accelerator for machine learning models. Because ONNX Runtime supports the Open Neural Network Exchange (ONNX), models trained from different frameworks can be converted to the ONNX format and run on all platforms supported by ONNX Runtime. This makes it easy to deploy machine learning models across different environments, including cloud, edge, and mobile devices. ONNX Runtime supports all the Oracle Cloud compute shapes including VM.Standard.A1.Flex (ARM CPU), VM.Standard.3/E3/4.Flex (AMD and Intel CPU), and VM.Optimized3.Flex (Intel CPU). Not only does ONNX Runtime run on a variety of hardware, but its execution provider interface also allows it to efficiently utilize accelerators specific to each hardware platform.

Validating ONNX Runtime

Based on our evaluation, we were optimistic about using ONNX Runtime as our model inferencing solution, and the next step was to verify its compatibility and performance to ensure it could meet our targets.

It was relatively easy to verify hardware, operating system, and model compatibility by just launching the model servers with ONNX Runtime in the cloud. To systematically measure and compare ONNX Runtime's performance and accuracy to alternative solutions, we developed a pipeline system. ONNX Runtime's extensibility simplified the benchmarking process, as it allowed us to seamlessly integrate other inference engines by compiling them as different execution providers (EPs) for ONNX Runtime. Thus, ONNX Runtime served not only as a runtime engine but as a platform where we could support many inference engines and choose the best one to suit our needs at runtime.

We compiled TVM, OneDNN, and OpenVINO into ONNX Runtime, and it was very convenient to switch between these different inference engines with a unified programming interface.
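The switching described above happens inside Oracle's own C++ serving layer; as a rough illustration of the same idea, ONNX Runtime's Python API lets a caller pick the execution provider per session without changing the model or the inference code. The provider list and model path below are placeholders, not Oracle's configuration.

import onnxruntime as ort

MODEL_PATH = "model.onnx"  # placeholder ONNX model exported from any training framework

def make_session(providers):
    # The same model file is reused; only the execution provider list changes.
    return ort.InferenceSession(MODEL_PATH, providers=providers)

# Prefer OpenVINO where it is available, otherwise fall back to the default CPU provider.
preferred = ["OpenVINOExecutionProvider", "CPUExecutionProvider"]
available = ort.get_available_providers()
session = make_session([p for p in preferred if p in available] or ["CPUExecutionProvider"])
print("Running with:", session.get_providers())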
For example, in Oracle's VM.Optimized3.Flex and BM.Optimized3.36 compute instances, where the Intel(R) Xeon(R) Gold 6354 CPU is available, OpenVINO could run faster than other inference engines by a large margin due to the AVX VNNI instruction set support. We didn't want to change our model serving code to fit different serving engines, and ONNX Runtime's EP feature conveniently allowed us to write the code once and run it with different inference engines.

Benchmarking ONNX Runtime with alternative inference engines

With our pipeline configured to test all relevant inference engines, we began the benchmarking process for different models and environments. In our tests, ONNX Runtime was the clear winner against alternatives by a big margin, measuring 30 to 300 percent faster than the original PyTorch inference engine regardless of whether just-in-time (JIT) was enabled.

ONNX Runtime on CPU was also the best solution compared to DNN compilers like TVM, OneDNN (formerly known as Intel MKL-DNN), and MLIR. OneDNN was the closest to ONNX Runtime, but still 20 to 80 percent slower in most cases. MLIR was not as mature as ONNX Runtime two years ago, and the conclusion still holds at the time of this writing: it doesn't support dynamic input shape models and only supports limited ONNX operators. TVM also performed well in static shape model inference, but for accuracy considerations most of our models use dynamic shape input, and TVM raised exceptions for our models. Even with static shape models, we found TVM to be slower than ONNX Runtime.

We investigated the reason for ONNX Runtime's strong performance and found ONNX Runtime to be extremely optimized for CPU servers. All the core algorithms, such as the crucial 2D convolution, transpose convolution, and pooling algorithms, are delicately hand-written in assembly code and statically compiled into the binary. It even won against TVM's autotuning without any extra preprocessing or tuning. OneDNN's JIT is designed to be flexible and extensible and can dynamically generate machine code for DNN primitives on the fly. However, it still lost to ONNX Runtime in our benchmark tests because ONNX Runtime statically compiled the primitives beforehand. Theoretically, there are several tunable parameters in the DNN primitive algorithms, so in some cases, like edge devices with different register files and CPU cache sizes, there might be better algorithms or implementations with different choices of parameters. However, for the DNN models in Oracle Cloud Compute CPU clusters, ONNX Runtime is a match made in heaven and is the fastest inference engine we have ever used.

Conclusion

We really appreciate the ONNX Runtime team for open-sourcing this amazing software and continuously improving it. This enables Oracle Cloud AI Services to provide a performant DNN model serving solution to our customers, and we hope that others will also find our experience helpful.

Learn more about ONNX Runtime
ONNX Runtime Tutorials.
Video tutorials for ONNX Runtime.

The post High-performance deep learning in Oracle Cloud with ONNX Runtime appeared first on Microsoft Open Source Blog.


What is ConfigureAwait(true) in Asp.Net Core 3.1 C ...
Category: .Net 7

When to Use ConfigureAwait() in ASP.NET Core 3.1 Code ...


Views: 242 Likes: 95
The code execution cannot proceed because msodbcsq ...
Category: Other

Question: The code execution cannot proceed because msodbcsql17.dll was not found. Reinstalling t ...


Views: 0 Likes: 14
Microsoft Unit Test does not discover a Unit Test ...
Category: Technology

Problem: Creating a unit test for a Web API can be complicated sometimes; a lot of things could g ...


Views: 215 Likes: 82
Microsoft Unit Test Project Won't Run in C# Applic ...
Category: Technology

How to Set Up Microsoft UnitTest in Visual Studio 2019 to Test an ASP.NET Core Application ...


Views: 264 Likes: 70
Towards debuggability and secure deployments of eBPF programs on Windows

The eBPF for Windows runtime has introduced a new mode of operation, native code generation, which exists alongside the currently supported modes of operation for eBPF programs, JIT (just-in-time compilation) and an interpreter, with the administrator able to select the mode when a program is loaded. The native code generation mode involves loading Windows drivers that contain signed eBPF programs. Due to the risks associated with having an interpreter in the kernel address space, it was decided to only enable it for non-production signed builds. The JIT mode supports the ability to dynamically generate code, write it into kernel pages, and finally set the permissions on the page from read/write to read/execute.

Enter the Windows Hyper-V hypervisor, a type 1 hypervisor, which has the Hypervisor-protected Code Integrity (HVCI) feature. It splits the kernel memory space into virtual trust levels (VTLs), with isolation enforced at the hardware level using virtualization extensions of the CPU. Most parts of the Windows kernel and all drivers operate in VTL0, the lowest trusted level, with privileged operations being performed inside the Windows secure kernel operating in VTL1. During the boot process, the hypervisor verifies the integrity of the secure kernel using cryptographic signatures prior to launching it, after which the secure kernel verifies the cryptographic signature of each code page prior to enabling read/execute permissions on the page. The signatures are validated using keys obtained from X.509 certificates that chain up to a Microsoft trusted root certificate. The net effect of this policy is that if HVCI is enabled, it is no longer possible to inject dynamically generated code pages into the kernel, which prevents the use of JIT mode. Similarly, Windows uses cryptographic signatures to restrict what code can be executed in the kernel. In keeping with these principles, eBPF for Windows has introduced a new mode of execution that an administrator can choose to use that maintains the integrity of the kernel and provides the safety promises of eBPF: native code generation.

The process starts with the existing tool chains, whereby eBPF programs are compiled into eBPF bytecode and emitted as ELF object files. The examples below assume the eBPF-for-Windows NuGet package has been unpacked to c:\ebpf and that the command is being executed from within a Developer Command Prompt for VS 2019.

How to use native code generation

Hello_world.c:

// Copyright (c) Microsoft Corporation
// SPDX-License-Identifier: MIT
#include "bpf_helpers.h"

SEC("bind")
int
HelloWorld()
{
    bpf_printk("Hello World!");
    return 0;
}

Compile to eBPF:

> clang -target bpf -O2 -Werror -Ic:/ebpf/include -c hello_world.c -o out/hello_world.o
> llvm-objdump -S out/hello_world.o

eBPF bytecode:

b7 01 00 00 72 6c 64 21                            r1 = 560229490
63 1a f8 ff 00 00 00 00                            *(u32 *)(r10 - 8) = r1
18 01 00 00 48 65 6c 6c 00 00 00 00 6f 20 57 6f    r1 = 8022916924116329800 ll
7b 1a f0 ff 00 00 00 00                            *(u64 *)(r10 - 16) = r1
b7 01 00 00 00 00 00 00                            r1 = 0
73 1a fc ff 00 00 00 00                            *(u8 *)(r10 - 4) = r1
bf a1 00 00 00 00 00 00                            r1 = r10
07 01 00 00 f0 ff ff ff                            r1 += -16
b7 02 00 00 0d 00 00 00                            r2 = 13
85 00 00 00 0c 00 00 00                            call 12
b7 00 00 00 00 00 00 00                            r0 = 0
95 00 00 00 00 00 00 00                            exit

The next step involves a new tool introduced specifically to support this scenario: bpf2c.
This tool parses the supplied ELF file, extracting the list of maps and stored programs before handing off the byte code to the eBPF verifier, which proves that the eBPF byte code is effectively sandboxed and constrained to terminate within a set number of instructions. The tool then performs a per-instruction translation of the eBPF byte code into the equivalent C statements and emits skeleton code used to perform relocation operations at run time. For convenience, the NuGet package also contains a PowerShell script that invokes bpf2c and then uses MSBuild to produce the final Portable Executable (PE) image (an image format used by Windows). As an aside, the process of generating the native image is decoupled from the process of developing the eBPF program, making it a deployment time decision rather than a development time one.

> powershell c:\ebpf\bin\Convert-BpfToNative.ps1 hello_world.o

C:\Users\user\hello_world\out>powershell c:\ebpf\bin\Convert-BpfToNative.ps1 hello_world.o
Microsoft (R) Build Engine version 16.9.0+57a23d249 for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.

Build started 5/17/2022 9:38:43 AM.
Project "C:\Users\user\hello_world\out\hello_world.vcxproj" on node 1 (default targets).
DriverBuildNotifications:
  Building 'hello_world_km' with toolset 'WindowsKernelModeDriver10.0' and the 'Desktop' target platform. Using KMDF 1.15.
<Lines removed for clarity>
Done Building Project "C:\Users\user\hello_world\out\hello_world.vcxproj" (default targets).

Build succeeded.
    0 Warning(s)
    0 Error(s)

Time Elapsed 00:00:03.57

> type hello_world_driver.c

// Snip: Removed boilerplate driver code and map setup.
static uint64_t
HelloWorld(void* context)
{
    // Prologue
    uint64_t stack[(UBPF_STACK_SIZE + 7) / 8];
    register uint64_t r0 = 0;
    register uint64_t r1 = 0;
    register uint64_t r2 = 0;
    register uint64_t r3 = 0;
    register uint64_t r4 = 0;
    register uint64_t r5 = 0;
    register uint64_t r10 = 0;

    r1 = (uintptr_t)context;
    r10 = (uintptr_t)((uint8_t*)stack + sizeof(stack));

    // EBPF_OP_MOV64_IMM pc=0 dst=r1 src=r0 offset=0 imm=560229490
    r1 = IMMEDIATE(560229490);
    // EBPF_OP_STXW pc=1 dst=r10 src=r1 offset=-8 imm=0
    *(uint32_t*)(uintptr_t)(r10 + OFFSET(-8)) = (uint32_t)r1;
    // EBPF_OP_LDDW pc=2 dst=r1 src=r0 offset=0 imm=1819043144
    r1 = (uint64_t)8022916924116329800;
    // EBPF_OP_STXDW pc=4 dst=r10 src=r1 offset=-16 imm=0
    *(uint64_t*)(uintptr_t)(r10 + OFFSET(-16)) = (uint64_t)r1;
    // EBPF_OP_MOV64_IMM pc=5 dst=r1 src=r0 offset=0 imm=0
    r1 = IMMEDIATE(0);
    // EBPF_OP_STXB pc=6 dst=r10 src=r1 offset=-4 imm=0
    *(uint8_t*)(uintptr_t)(r10 + OFFSET(-4)) = (uint8_t)r1;
    // EBPF_OP_MOV64_REG pc=7 dst=r1 src=r10 offset=0 imm=0
    r1 = r10;
    // EBPF_OP_ADD64_IMM pc=8 dst=r1 src=r0 offset=0 imm=-16
    r1 += IMMEDIATE(-16);
    // EBPF_OP_MOV64_IMM pc=9 dst=r2 src=r0 offset=0 imm=13
    r2 = IMMEDIATE(13);
    // EBPF_OP_CALL pc=10 dst=r0 src=r0 offset=0 imm=12
    r0 = HelloWorld_helpers[0].address(r1, r2, r3, r4, r5);
    if ((HelloWorld_helpers[0].tail_call) && (r0 == 0))
        return 0;
    // EBPF_OP_MOV64_IMM pc=11 dst=r0 src=r0 offset=0 imm=0
    r0 = IMMEDIATE(0);
    // EBPF_OP_EXIT pc=12 dst=r0 src=r0 offset=0 imm=0
    return r0;
}

As illustrated here, each eBPF instruction is translated into an equivalent C statement, with eBPF registers being emulated using stack variables named R0 to R10. Lastly, the tool adds a set of boilerplate code that handles the interactions with the I/O Manager required to load the code into the Windows kernel, with the result being a single C file.
The Convert-BpfToNative.ps1 script then invokes the normal Windows Driver Kit (WDK) tools to compile and link the eBPF program into its final PE image. Once the developer is ready to deploy their eBPF program in a production environment that has HVCI enabled, they will need to get their driver signed via the normal driver signing process. For a production workflow, one could imagine a service that consumes the ELF file (the eBPF byte code), securely verifies that it is safe, generates the native image, and signs it before publishing it for deployment. This could then be integrated into the existing developer workflows.

The eBPF for Windows runtime has been enlightened to support these eBPF programs hosted in Windows drivers, resulting in a developer experience that closely mimics the behavior of eBPF programs that use JIT. The result is a pipeline that looks like the diagram in the original post.

The net effect is to introduce a new statically sandboxed model for Windows drivers, with the resulting driver being signed using standard Windows driver signing mechanisms. While this additional step does increase the time needed to deploy an eBPF program, some customers have determined that the tradeoff is justified by the ability to safely add eBPF programs to systems with HVCI enabled.

Diagnostics and eBPF programs

One of the key pain points of developing eBPF programs is making sure they pass verification. The process of loading programs once they have been compiled, potentially on an entirely different system, gives rise to a subpar developer experience. As part of adding support for native code generation, eBPF for Windows has integrated the verification into the build pipeline, so that developers get build-time feedback when an eBPF program fails verification.

Using a slightly more complex eBPF program as an example, the developer gets a build-time error when the program fails verification (the original post shows the eBPF C code and the resulting build error). This then points the developer to line 96 of the source code, where they can see that the start time variable could be NULL.

As with all other instances of code, eBPF programs can have bugs. While the verifier can prove that code is safe, it is unable to prove code is correct. One approach that was pioneered by the Linux community is the use of logging built around the bpf_printk style macro, which permits developers to insert trace statements into their eBPF programs to aid diagnosability. To both maintain compatibility with the Linux eBPF ecosystem and provide a useful mechanism, eBPF for Windows has adopted a similar approach. One of the key differences is how these events are implemented, with Linux using a file-based approach and Windows using Event Tracing for Windows (ETW). ETW has a long history within Windows and a rich ecosystem of tools that can be used to capture and process traces.

A second useful tool that is now available to developers using native code generation is the ability to perform source-level debugging of eBPF programs. If the eBPF program is compiled with BTF data, the bpf2c tool will translate this in addition to the instructions and emit the appropriate pragmas containing the original file name and line numbers (with plans to extend this to allow the debugger to show eBPF local variables in the future). These are then consumed by the Windows Developer Kit tools and encoded into the final driver and symbol files, which the debugger can use to perform source-level debugging.
In addition, these same symbol files can then be used by profiling tools to determine hot spots within eBPF programs and areas where performance could be improved.

Learn more

The introduction of support for native image generation enhances eBPF for Windows in three areas:
- A new mode of execution permits eBPF programs to be deployed on previously unsupported systems.
- A mechanism for offline verification and signing of eBPF programs.
- The ability for developers to perform source-level debugging of their eBPF programs.

While support will continue for the existing JIT mode, this change gives developers and administrators flexibility in how programs are deployed. Separating the process of native image generation from the development of the eBPF program places the decision on how to deploy an eBPF program in the hands of the administrator and unburdens the developer from deployment time concerns.

The post Towards debuggability and secure deployments of eBPF programs on Windows appeared first on Microsoft Open Source Blog.


Microsoft Access Tutorials
Category: Databases

Microsoft Access T ...


Views: 356 Likes: 118
Performant on-device inferencing with ONNX Runtime

As machine learning usage continues to permeate across industries, we see broadening diversity in deployment targets, with companies choosing to run locally on-client versus cloud-based services for security, performance, and cost reasons. On-device machine learning model serving is a difficult task, especially given the limited bandwidth of early-stage startups. This guest post from the team at Pieces shares the problems and solutions evaluated for their on-device model serving stack and how ONNX Runtime serves as their backbone of success.

Local-first machine learning

Pieces is a code snippet management tool that allows developers to save, search, and reuse their snippets without interrupting their workflow. The magic of Pieces is that it automatically enriches these snippets so that they're more useful to the developer after being stored in Pieces. A large part of this enrichment is driven by our machine learning models that provide programming language detection, concept tagging, semantic description, snippet clustering, optical character recognition, and much more. To enable full coverage of the developer workflow, we must run these models from the desktop, terminal, integrated development environment, browser, and team communication channels.

Like many businesses, our first instinct was to serve these models as cloud endpoints; however, we realized this wouldn't suit our needs for a few reasons. First, in order to maintain a seamless developer workflow, our models must have low latency. The round trip to the server is lost time we can't afford. Second, our users are frequently working with proprietary code, so privacy is a primary concern. Sending this data over the wire would expose it to potential attacks. Finally, hosting models on performant cloud machines can be very expensive and is an unnecessary cost in our opinion. We firmly believe that advances in modern personal hardware can be taken advantage of to rival or even improve upon the performance of models on virtual machines. Therefore, we needed an on-device model serving platform that would provide us with these benefits while still giving our machine learning engineers the flexibility that cloud serving offers. After some trial and error, ONNX Runtime emerged as the clear winner.

Our ideal machine learning runtime

When we set out to find the backbone of our machine learning serving system, we were looking for the following qualities:

Easy implementation: It should fit seamlessly into our stack and require minimal custom code to implement and maintain. Our application is built in Flutter, so the runtime would ideally work natively in the Dart language so that our non-machine learning engineers could confidently interact with the API.

Balanced: As I mentioned above, performance is key to our success, so we need a runtime that can spin up and perform inference lightning fast. On the other hand, we also need useful tools to optimize model performance, ease model format conversion, and generally facilitate the machine learning engineering processes.

Model coverage: It should support the vast majority of machine learning model operators and architectures, especially cutting-edge models, such as those in the transformer family.

TensorFlow Lite

Our initial research revealed three potential options: TensorFlow Lite, TorchServe, and ONNX Runtime. TensorFlow Lite was our top pick because of how easy it would be to implement. We found an open source Dart package which provided Dart bindings to the TensorFlow Lite C API out-of-the-box.
This allowed us to simply import the package and immediately have access to machine learning models in our application without worrying about the lower-level details in C and C++.

The tiny runtime offered great performance and worked very well for the initial models we tested in production. However, we quickly ran into a huge blocker: converting other model formats to TensorFlow Lite is a pain. Our first realization of this limitation came when we tried and failed to convert a simple PyTorch LSTM to TensorFlow Lite. This spurred further research into how else we might be limited. We found that many of the models we planned to work on in the future would have to be trained in TensorFlow or Keras because of conversion issues. This was problematic because we've found that there's not a one-size-fits-all machine learning framework. Some are better suited for certain tasks, and our machine learning engineers differ in preference and skill level for each of these frameworks; unfortunately, we tend to favor PyTorch over TensorFlow.

This issue was then compounded by the fact that TensorFlow Lite only supports a subset of the machine learning operators available in TensorFlow and Keras; importantly, it lags in more cutting-edge operators that are required in new, high-performance architectures. This was the final straw for us with TensorFlow Lite. We were looking to implement a fairly standard transformer-based model that we'd trained in TensorFlow and found that the conversion was impossible. To take advantage of the leaps and bounds made in large language models, we needed a more flexible runtime.

TorchServe

Having learned our lesson on locking ourselves into a specific training framework, we opted to skip testing out TorchServe so that we would not run into the same conversion issues.

ONNX Runtime saves the day

Like TensorFlow Lite, ONNX Runtime gave us a lightweight runtime that focused on performance, but where it really stood out was the model coverage. Being built around the ONNX format, which was created to solve interoperability between machine learning tools, it allowed our machine learning engineers to choose the framework that works best for them and the task at hand and have confidence that they would be able to convert their model to ONNX in the end. This flexibility brought more fluidity to our research and development process and reduced the time spent preparing new models for release.

Another large benefit of ONNX Runtime for us is a standardized model optimization pipeline, truly becoming the "balanced" tool we were looking for. By serving models in a single format, we're able to iterate through a fixed set of known optimizations until we find the desired speed, size, and accuracy tradeoff for each model. Specifically, for each of our ONNX models, the last step before production is to apply different levels of ONNX Runtime graph optimizations and linear quantization. The ease of this process is a quick win for us every time.

Speaking of feature-richness, a final reason that we chose ONNX Runtime was that the baseline performance was good, but there were many options we could implement down the road to improve performance. Due to the way we currently build our app, we have been limited to the vanilla CPU builds of ONNX Runtime. However, an upcoming modification to our infrastructure will allow us to utilize execution providers to serve optimized versions of ONNX Runtime based on a user's CPU and GPU architecture.
We also plan to implement dynamic thread management as well as IOBinding for GPU-enabled devices.

Production workflow

Now that we've covered our reasoning for choosing ONNX Runtime, we'll do a brief technical walkthrough of how we utilize ONNX Runtime to facilitate model deployment.

Model conversion

After we've finished training a new model, our first step towards deployment is getting that model into an ONNX format. The specific conversion approach depends on the framework used to train the model. We have successfully used the conversion tools supplied by HuggingFace, PyTorch, and TensorFlow.

Some model formats are not supported by these conversion tools, but luckily ONNX Runtime has its own internal conversion utilities. We recently used these tools to implement a T5 transformer model for code description generation. The HuggingFace model uses a BeamSearch node for text generation that we were only able to convert to ONNX using ONNX Runtime's convert_generation.py tool, which is included in their transformer utilities.

ONNX model optimization

Our first optimization step is running the ONNX model through all ONNX Runtime optimizations, using GraphOptimizationLevel.ORT_ENABLE_ALL, to reduce model size and startup time. We perform all these optimizations offline so that our ONNX Runtime binary doesn't have to perform them on startup. We are able to consistently reduce model size and latency very easily with this utility.

Our second optimization step is quantization. Again, ONNX Runtime provides an excellent utility for this. We've used both quantize_dynamic() and quantize_static() in production, depending on our desired balance of speed and accuracy for a specific model.

Inference

Once we have an optimized ONNX model, it's ready to be put into production. We've created a thin wrapper around the ONNX Runtime C++ API which allows us to spin up an instance of an inference session given an arbitrary ONNX model. We based this wrapper on the onnxruntime-inference-examples repository. After developing this simple wrapper binary, we were able to quickly get native Dart support using the Dart FFI (Foreign Function Interface) to create Dart bindings for our C++ API. This reduces the friction between teams at Pieces by allowing our Dart software engineers to easily inject our machine learning efforts into all of our services.

Conclusion

On-device machine learning requires a tool that is performant yet allows you to take full advantage of the current state-of-the-art machine learning models. ONNX Runtime gracefully meets both needs, not to mention the incredibly helpful ONNX Runtime engineers on GitHub who are always willing to assist and are constantly pushing ONNX Runtime forward to keep up with the latest trends in machine learning. It's for these reasons that we at Pieces confidently rest our entire machine learning architecture on its shoulders.

Learn more about ONNX Runtime
ONNX Runtime Tutorials.
Video tutorials for ONNX Runtime.

The post Performant on-device inferencing with ONNX Runtime appeared first on Microsoft Open Source Blog.
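The offline optimization and quantization steps described above map onto ONNX Runtime's Python utilities roughly as follows; the file names are placeholders and Pieces drives the equivalent steps from its own tooling, so treat this as a sketch rather than their pipeline.

import onnxruntime as ort
from onnxruntime.quantization import QuantType, quantize_dynamic

# Step 1: apply all graph optimizations offline and save the optimized model,
# so the runtime doesn't pay the optimization cost at startup.
sess_options = ort.SessionOptions()
sess_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
sess_options.optimized_model_filepath = "model.opt.onnx"
ort.InferenceSession("model.onnx", sess_options, providers=["CPUExecutionProvider"])

# Step 2: dynamic quantization of the optimized model (quantize_static would be used
# instead when a representative calibration dataset is available).
quantize_dynamic("model.opt.onnx", "model.opt.int8.onnx", weight_type=QuantType.QInt8)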


InvalidOperationException: The 'Microsoft.AspNetCo ...
Category: Questions

Question: How do you solve "InvalidOperationException: The 'Microsoft-AspNetCor ...


Views: 495 Likes: 64
Microsoft Machine Learning mlContext Forecasting d ...
Category: Machine Learning

Question: Microsoft Machine Learning mlContext Forecasting does not have ForecastBySsa DotNet Cor ...


Views: 1122 Likes: 87
Microsoft.Common.CurrentVersion.targets(4678,5): e ...
Category: .Net 7

Question: How do you resolve the error "Severity Code De ...


Views: 0 Likes: 34
Login failed for user . (Microsoft SQL, Error: 184 ...
Category: SQL

Problem: When you are trying to log in to SQL Server with a ne ...


Views: 373 Likes: 108
SignalR Error in Dot Net Core 3.1 (.Net Core 3.1) ...
Category: .Net 7

Problems when implementing SignalR in Dot Net Core 3.1 (.NET Core 3.1). Error: Failed to invoke 'H ...


Views: 2201 Likes: 100
Learn C-Sharp Programming Language
Category: C-Sharp

C# Program ...


Views: 375 Likes: 100
[91 Error Access] Solve 91 Error problem when usin ...
Category: Databases

When using a Microsoft Access database, sometimes it throws an error called 91 ...


Views: 307 Likes: 76
Unhandled exception. System.DllNotFoundException: ...
Category: Network

Problem: Unhandled exception. System.DllNotFoundException: Unable to load shared ...


Views: 1207 Likes: 99
[Access,Excel Linked Table Problem] Solved!!
Category: Technology

...


Views: 280 Likes: 86
Announcing the availability of Feathr 1.0

This blog is co-authored by Edwin Cheung, Principal Software Engineering Manager, and Xiaoyong Zhu, Principal Data Scientist.

Feathr is an enterprise-scale feature store, which facilitates the creation, engineering, and usage of machine learning features in production. It has been used by many organizations as an online/offline store, as well as for real-time streaming.

Today, we are excited to announce the much-anticipated availability of the OSS Feathr 1.0. It contains many new features and enhancements since Feathr became open source one year ago. Capabilities such as online transformation, the rapid sandbox environment, and MLOps V2 accelerator integration really accelerate the development and deployment of machine learning projects at enterprise scale.

Online transformation via domain specific language (DSL)

In various machine learning scenarios, feature generation is required for both training and inference. There is a limitation where the data source cannot come from an online service, as currently transformation only happens before feature data is published to the online store, and the transformation is required close to real-time. In such cases, there is a need for a mechanism where the user has the ability to run transformation on the inference data dynamically before inferencing via the model. The new online transformation via DSL feature addresses these challenges by using a custom transformation engine that can process transformation requests and responses close to real-time on demand. It allows definition of transformation logic declaratively using DSL syntax, which is based on EBNF. It also provides extensibility, where there is a need to define custom complex transformations, by supporting user-defined functions (UDFs) written in Python or Java.

nyc_taxi_demo(pu_loc_id as int, do_loc_id as int, pu_time as string, do_time as string, trip_distance as double, fare_amount as double)
| project duration_second = (to_unix_timestamp(do_time, "%Y/%-m/%-d %-H:%-M") - to_unix_timestamp(pu_time, "%Y/%-m/%-d %-H:%-M"))
| project speed_mph = trip_distance * 3600 / duration_second;

This declarative logic runs in a new high-performance DSL engine. We provide a Helm chart to deploy this service on a container-based technology such as Azure Kubernetes Service (AKS). The transformation engine can also run as a standalone executable, which is an HTTP server that can be used to transform data for testing purposes.

feathrfeaturestore/feathrpiper:latest

curl -s -H "content-type: application/json" http://localhost:8000/process -d '{"requests": [{"pipeline": "nyc_taxi_demo_3_local_compute", "data": {"pu_loc_id": 41, "do_loc_id": 57, "pu_time": "2020/4/1 0:41", "do_time": "2020/4/1 0:56", "trip_distance": 6.79, "fare_amount": 21.0}}]}'

It also provides the ability to auto-generate the DSL file if there are already predefined feature transformations which have been created for the offline transformation.

Online transformation performance benchmark

It is imperative that online transformation performs close to real-time and meets low-latency demand with high queries per second (QPS) transformation for many of the enterprise customers' needs. To determine the performance, we have conducted a benchmark on three tests: first, deployment on AKS with traffic going through the ingress controller; second, traffic going through the AKS internal load balancer; and finally, via localhost.
Benchmark A: Traffic going through ingress controller (AKS)

Infrastructure setup:
- Test agent runs on 1 pod on a node with size Standard_D8ds_v5.
- Transform function deployed as a Docker image running on 1 pod on a different node with size Standard_D8ds_v5 in the same AKS cluster.
- Agent sends requests through the service hostname, which means traffic will go through the ingress controller.
- Test command: ab -k -c {concurrency_count} -n 1000000 http://feathr-online.trafficmanager.net/healthz

Benchmark A result:
Total Requests   Concurrency   p90   p95   p99   request/sec
1000000          100           3     4     9     43710
1000000          200           6     8     15    43685
1000000          300           10    11    18    43378
1000000          400           13    15    21    43220
1000000          500           16    19    24    42406

Benchmark B: Traffic going through AKS internal load balancer

Infrastructure setup:
- Test agent runs on 1 pod on a node with size Standard_D8ds_v5.
- Transform function deployed as a Docker image running on 1 pod on a different node with size Standard_D8ds_v5 in the same AKS cluster.
- Agent sends requests through the service IP, which means traffic will go through the internal load balancer.
- Test command: ab -k -c {concurrency_count} -n 1000000 http://10.0.187.2/healthz (for example, ab -k -c 100 -n 1000000 http://10.0.187.2/healthz)

Benchmark B result:
Total Requests   Concurrency   p90   p95   p99   request/sec
1000000          100           3     4     4     47673
1000000          200           5     7     8     47035
1000000          300           9     10    12    46613
1000000          400           11    12    15    45362
1000000          500           14    15    19    44941

Benchmark C: Traffic going through localhost (AKS)

Infrastructure setup:
- Test agent runs on 1 pod on a node with size Standard_D8ds_v5.
- Transform function deployed as a Docker image running on the same pod.
- Agent sends requests through localhost, which means there is no network traffic at all.
- Test command: ab -k -c {concurrency_count} -n 1000000 http://localhost/healthz

Benchmark C result:
Total Requests   Concurrency   p90   p95   p99   request/sec
1000000          100           2     2     3     59466
1000000          200           4     4     5     59433
1000000          300           6     6     8     60184
1000000          400           8     9     10    59622
1000000          500           10    11    14    59031

Benchmark summary:
- If the transform service and the upstream caller are on the same host/pod, the p95 latency is very good, staying within 10 ms if concurrency < 500.
- If the transform service and the upstream caller are on different hosts/pods, the p95 latency might increase by 2-4 ms if traffic goes through the internal load balancer.
- If the transform service and the upstream caller are on different hosts/pods, the p95 latency might increase by 2-8 ms if traffic goes through the ingress controller.

Benchmark thanks to Blair Chan and Chen Xu. For more details, check out the online transformation guide.

Getting started with the sandbox environment

This is an exciting feature, especially for data scientists who may not have the necessary infrastructure background or know how to deploy the infrastructure in the cloud. The sandbox is a fully-featured, quick-start Feathr environment that enables organizations to rapidly prototype various capabilities of Feathr without the burden of full-scale infrastructure deployment. It is designed to make it easier for users to get started quickly, validate feature definitions and new ideas, and offer an interactive experience.

By default, it comes with a Jupyter notebook environment to interact with the Feathr platform. Users can also use the user experience (UX) to visualize the features, lineage, and other capabilities.

To get started, check out the quick start guide to the local sandbox.

Feathr with MLOps V2 accelerator

The MLOps V2 solution accelerator provides a modular end-to-end approach to MLOps in Azure based on pattern architecture. We are pleased to announce an initial integration of Feathr into the classical pattern that enables Terraform-based infrastructure deployment as part of the infrastructure provisioning with an Azure Machine Learning (AML) workspace.
Getting started with the sandbox environment
This is an exciting feature, especially for data scientists who may not have the necessary infrastructure background or know how to deploy infrastructure in the cloud. The sandbox is a fully featured, quick-start Feathr environment that lets organizations rapidly prototype Feathr's capabilities without the burden of a full-scale infrastructure deployment. It is designed to make it easy for users to get started quickly, validate feature definitions and new ideas, and work interactively. By default, it comes with a Jupyter notebook environment for interacting with the Feathr platform. Users can also use the user experience (UX) to visualize features, lineage, and other capabilities. To get started, check out the quick start guide to the local sandbox.

Feathr with the MLOps V2 accelerator
The MLOps V2 solution accelerator provides a modular, end-to-end approach to MLOps in Azure based on pattern architectures. We are pleased to announce an initial integration of Feathr into the classical pattern, enabling Terraform-based infrastructure deployment as part of infrastructure provisioning with an Azure Machine Learning (AML) workspace. With this integration, enterprise customers can use the templates to customize their continuous integration and continuous delivery (CI/CD) workflows to run end-to-end MLOps in their organization. Check out the Feathr integration with MLOps V2 deployment guide.

Feathr GUI enhancements
We have added a number of enhancements to the graphical user interface (GUI) to improve usability. These include support for registering features, deleting features, displaying versions, and quick access to lineage via the top menu. Try out the demo UX on our live demo site.

What's next
The Feathr journey has just begun; this is the first stop on the way to many great things to come. Stay tuned for more enterprise enhancements, security, monitoring, and compliance features, along with a richer MLOps experience. Check out how you can contribute to this project, and if you have not already, join our Slack channel.

The post Announcing the availability of Feathr 1.0 appeared first on Microsoft Open Source Blog.


How to install DotNet 8 runtime on Linux Ubuntu
Category: .NET 7

Question How do I install the dotnet core 8 runtime on Linux Ubuntu? Answer Follow the steps ...


Views: 0 Likes: 22
Excel Programming VB
Category: Technology

Sometimes it is important to learn to programme in Microsoft Office products. ...


Views: 343 Likes: 101
PERMANENT ROLE | WEB APPLICATION DEVELOPER | MOREL ...
Category: Jobs

...


Views: 300 Likes: 94
Faster inference for PyTorch models with OpenVINO Integration with Torch-ORT
Faster inference for PyTorch models with OpenVINO ...

Deep learning models are everywhere, often without us even realizing it. The number of AI use cases has been increasing exponentially with the rapid development of new algorithms, cheaper compute, and greater access to data. Almost every industry has deep learning applications, from healthcare to education to manufacturing, construction, and beyond. Many developers opt to use popular AI frameworks like PyTorch, which simplifies the process of analyzing predictions, training models, leveraging data, and refining future results.

PyTorch on Azure: get an enterprise-ready PyTorch experience in the cloud.

PyTorch is a machine learning framework used for applications such as computer vision and natural language processing. It was originally developed by Meta AI and is now part of the Linux Foundation umbrella, under the PyTorch Foundation. PyTorch has a powerful, TorchScript-based implementation that transforms a model from eager to graph mode for deployment scenarios.

One of the biggest challenges PyTorch developers face in their deep learning projects is model optimization and performance. Oftentimes the question arises: how can I improve the performance of my PyTorch models? As you might have read in our previous blog, Intel and Microsoft have joined hands to tackle this problem with OpenVINO Integration with Torch-ORT. Initially, Microsoft released Torch-ORT, which focused on accelerating PyTorch model training using ONNX Runtime. Recently, this capability was extended to accelerate PyTorch model inferencing by using the OpenVINO toolkit on Intel central processing units (CPU), graphics processing units (GPU), and vision processing units (VPU) with just two lines of code.

Figure 1: OpenVINO Integration with Torch-ORT application flow. This figure shows how OpenVINO Integration with Torch-ORT can be used in a computer vision application.

By adding just two lines of code, we achieved 2.15 times faster inference for the PyTorch Inception V3 model on an 11th Gen Intel Core i7 processor1. In addition to Inception V3, we also see performance gains for many popular PyTorch models such as ResNet50, RoBERTa-Base, and more. Currently, OpenVINO Integration with Torch-ORT supports over 120 PyTorch models from popular model zoos, like Torchvision and Hugging Face.

Figure 2: FP32 model performance of OpenVINO Integration with Torch-ORT compared to PyTorch. This chart shows average inference latency (in milliseconds) for 100 runs after 15 warm-up iterations on an 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz.

Features
OpenVINO Integration with Torch-ORT introduces the following features:
- Inline conversion of static/dynamic input shape models
- Graph partitioning
- Support for INT8 models
- Dockerfiles/Docker containers

Inline conversion of static/dynamic input shape models
OpenVINO Integration with Torch-ORT performs inferencing of PyTorch models by converting these models to ONNX inline and subsequently performing inference with the OpenVINO Execution Provider. Both static and dynamic input shape models are currently supported. You also have the ability to save the inline-exported ONNX model using the DebugOptions API.
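To make the "two lines of code" mentioned above concrete, here is a minimal sketch based on the usage described in the torch-ort-infer documentation. The ORTInferenceModule wrapper comes from that package; the ResNet50 model and dummy input are illustrative assumptions rather than the blog's benchmark setup, and exact API details may vary by version.

# Minimal sketch, assuming the torch-ort-infer package (torch_ort_infer) is installed.
import torch
import torchvision.models as models
from torch_ort import ORTInferenceModule  # wrapper provided by torch-ort-infer

model = models.resnet50().eval()  # random weights are fine for a smoke test

# The "two lines": wrap the model so inference runs through ONNX Runtime
# with the OpenVINO Execution Provider on Intel hardware.
model = ORTInferenceModule(model)

with torch.no_grad():
    output = model(torch.randn(1, 3, 224, 224))  # dummy image batch
print(output.shape)  # expected: torch.Size([1, 1000])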
Graph partitioning
OpenVINO Integration with Torch-ORT supports many PyTorch models by leveraging the existing graph partitioning feature from ONNX Runtime. With this feature, the input model graph is divided into subgraphs depending on the operators supported by OpenVINO: the OpenVINO-compatible subgraphs run using the OpenVINO Execution Provider, while unsupported operators fall back to the MLAS CPU Execution Provider.

Support for INT8 models
OpenVINO Integration with Torch-ORT extends support for lower-precision inference through the post-training quantization (PTQ) technique. Using PTQ, developers can quantize their PyTorch models with the Neural Network Compression Framework (NNCF) and then run inferencing with OpenVINO Integration with Torch-ORT. Note: currently, our INT8 model support is in the early stages, covering only ResNet50 and MobileNetV2; we are continuously expanding our INT8 model coverage.

Docker containers
You can now use OpenVINO Integration with Torch-ORT on macOS and Windows through Docker. Pre-built Docker images are readily available on Docker Hub for your convenience. With a simple docker pull, you can start accelerating the performance of PyTorch models. To build the Docker image yourself, dockerfiles are also available on GitHub.

Customer story: Roboflow
Roboflow empowers ISVs to build their own computer vision applications and enables hundreds of thousands of developers with a rich catalog of services, models, and frameworks to further optimize their AI workloads on a variety of Intel hardware. An easy-to-use developer toolkit for accelerating models, properly integrated with AI frameworks such as OpenVINO Integration with Torch-ORT, provides the best of both worlds: an increase in inference speed as well as the ability to reuse already-created AI application code with minimal changes. The Roboflow team has showcased a case study that demonstrates performance gains with OpenVINO Integration with Torch-ORT compared to native PyTorch for the YOLOv7 model on an Intel CPU. The Roboflow team continues to actively test OpenVINO Integration with Torch-ORT with the goal of enabling PyTorch developers in the Roboflow community.

Try it out
Try out OpenVINO Integration with Torch-ORT through a collection of Jupyter notebooks. Through these sample tutorials, you will see how to install OpenVINO Integration with Torch-ORT and accelerate performance for PyTorch models with just two additional lines of code. Stay in the PyTorch framework and leverage OpenVINO optimizations; it doesn't get much easier than this.

Learn more
Here is a list of resources to help you learn more:
- GitHub repository
- Sample notebooks
- Supported models
- Usage guide
- PyTorch on Azure

Notes
1 Framework configuration: ONNXRuntime 1.13.1
Application configuration: torch_ort_infer 1.13.1, Python timeit module for timing inference of models
Input: classification models: torch.Tensor; NLP models: masked sentence; OD model: .jpg image
Application metric: average inference latency for 100 iterations, calculated after 15 warmup iterations
Platform: Tiger Lake
Number of nodes: 1 NUMA node
Number of sockets: 1
CPU or accelerator: 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz
Cores/socket, threads/socket or EU/socket: 4, 2 threads/core
ucode: 0xa4
HT: Enabled
Turbo: Enabled
BIOS version: TNTGLV57.9026.2020.0916.1340
System DDR memory config (slots / cap / run-speed): 2 / 32 GB / 2667 MT/s
Total memory/node (DDR+DCPMM): 64GB
Storage (boot): Sabrent Rocket 4.0 500GB, size 465.8G
OS: Ubuntu 20.04.4 LTS
Kernel: 5.15.0-1010-intel-iotg

Notices and disclaimers
Performance varies by use, configuration, and other factors.
Learn more at www.Intel.com/PerformanceIndex. Performance results are based on testing as of the dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure. Your costs and results may vary. Intel technologies may require enabled hardware, software, or service activation. Intel disclaims all express and implied warranties, including without limitation the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade. Results have been estimated or simulated. Intel, the Intel logo, OpenVINO, and the OpenVINO logo are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

The post Faster inference for PyTorch models with OpenVINO Integration with Torch-ORT appeared first on Microsoft Open Source Blog.


How to Run DotNet Core 3.0 in Watch Mode without i ...
Category: .Net 7

Run DotNet Core 3.0 in Watch Mode without installing any extensions ...


Views: 673 Likes: 95

