Guest post by Joseph Hsieh (Principal Scientist, Project Lead at Adobe), Devin Fernandez (Director of Product Management, Adobe), and Jason Mayes (Web ML Lead, Google)
Introduction
Photoshop Web Beta is a browser-based version of the popular desktop image editing software, Adobe Photoshop. This online tool offers a wide range of features and capabilities for editing, enhancing, and manipulating images, all through a web browser.
In this post, we will explore how Adobe plans to bring advanced ML features from desktop to web, such as the Object Selection tool. We will also look at how web-based machine learning in JavaScript can improve the performance and user experience of Photoshop Web Beta, and what we can expect in the future.
Challenge
Photoshop has recently been made available on the web through WebAssembly in our first attempt to port our tooling to the browser. However, advanced ML features such as the Object Selection Tool currently rely on a cloud inference solution, which requires the user to be online and to send data to a cloud service that performs the machine learning task. This means the web app cannot run offline, user privacy is not fully preserved, and each call to the cloud adds latency as well as the monetary cost of running those models on our own server hardware.
When it comes to the Object Selection tool, relying on cloud inference can sometimes result in suboptimal performance due to network latency. To provide a better user experience, Adobe Photoshop Web Beta eliminates this latency by developing an on-device inference solution, resulting in faster predictions and a more responsive UI.
TensorFlow.js is an open-source machine learning library from Google aimed at JavaScript developers that can run client side in the browser. It is the most mature option for web ML, with comprehensive operator support across its WebGL and WebAssembly backends, and as new web standards evolve there will also be an option to use a WebGPU backend within the browser for faster performance. Adobe collaborated with Google to bring TensorFlow.js to Photoshop Web Beta and enable advanced tasks, such as object selection, using ML running in the browser. The details of the collaboration are explained below.
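As an illustration of what getting started looks like, the snippet below is a minimal sketch of initializing TensorFlow.js with a backend fallback and loading a converted model entirely client side. The model URL is a hypothetical placeholder and the fallback order is an assumption for illustration, not Photoshop's actual configuration.

```typescript
import * as tf from '@tensorflow/tfjs';
// The WebAssembly backend ships as a separate package and registers itself on import.
import '@tensorflow/tfjs-backend-wasm';

// Hypothetical model location, for illustration only.
const MODEL_URL = 'https://example.com/models/object-selection/model.json';

async function initWebML(): Promise<tf.GraphModel> {
  // Prefer the GPU-accelerated WebGL backend; fall back to WASM where WebGL
  // is unavailable or blocklisted on the device.
  if (!(await tf.setBackend('webgl'))) {
    await tf.setBackend('wasm');
  }
  await tf.ready();
  console.log(`TensorFlow.js is running on the ${tf.getBackend()} backend`);

  // Load a converted graph model so inference can run entirely in the browser.
  return tf.loadGraphModel(MODEL_URL);
}
```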
When we first started to convert to a web solution, we noticed that there were synchronization issues between WebAssembly (which our core ported Photoshop code was running in) and TensorFlow.js (which was running the ML models in the browser). Essentially, we needed to load and run the TensorFlow.js models synchronously instead of asynchronously to work with our WebAssembly port of Photoshop. One potential 3rd party solution was not an option due to its drawbacks, such as a large code size overhead and unpredictable performance across devices. So, a new solution was required.
To tackle these challenges, Google and Adobe first collaborated to bring a Proxying API to Emscripten, an LLVM-based compiler toolchain that compiles code written in C or C++ to WebAssembly so it can run in the browser and interact with JavaScript libraries. The Proxying API resolves the issues that the 3rd party solution suffered from and allows for seamless integration between Photoshop's WebAssembly implementation and TensorFlow.js model execution.
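The Proxying API itself lives on the C/C++ side of Emscripten, so rather than reproduce it here, the TypeScript sketch below only illustrates the underlying problem it solves: a thread that must behave synchronously (like the ported Photoshop code) waiting on a result that TensorFlow.js can only produce asynchronously on another thread. The SharedArrayBuffer layout, message shapes, and routing are all hypothetical, and this is not Emscripten's or Adobe's actual mechanism.

```typescript
// --- In the worker that must stay synchronous (stand-in for the WASM side) ---
function requestMaskSync(shared: SharedArrayBuffer, pixels: Uint8Array): Uint8Array {
  const flag = new Int32Array(shared, 0, 1);   // completion flag at offset 0
  const out = new Uint8Array(shared, 4);       // result buffer after the flag
  Atomics.store(flag, 0, 0);
  // Hand the request off toward the ML thread (routing via the main thread elided).
  postMessage({ type: 'infer', pixels }, [pixels.buffer]);
  // Block this worker until the ML thread signals that the result is ready.
  Atomics.wait(flag, 0, 0);
  return out.slice();                          // copy the mask out of shared memory
}

// --- In the worker that owns TensorFlow.js ---
async function serveRequest(
  shared: SharedArrayBuffer,
  runModel: (pixels: Uint8Array) => Promise<Uint8Array>, // async TF.js inference
  pixels: Uint8Array
): Promise<void> {
  const flag = new Int32Array(shared, 0, 1);
  const out = new Uint8Array(shared, 4);
  const mask = await runModel(pixels);         // asynchronous model execution
  out.set(mask);                               // write the result into shared memory
  Atomics.store(flag, 0, 1);
  Atomics.notify(flag, 0);                     // wake the blocked worker
}
```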
Next, once communication between WebAssembly and TensorFlow.js was possible, Adobe ported key ML models, such as the one used in object selection shown above, to the TensorFlow.js format. The TensorFlow.js team aided in optimizing these models, focusing on commonly used ops such as the Conv2D operation, to ensure the converted models ran as fast as possible in the browser.
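Once a model is in the TensorFlow.js graph model format, running it against pixels in the browser can look roughly like the sketch below. The 512x512 input size, the normalization, and the single-channel mask output are assumptions made for illustration, not the contract of the actual Photoshop model.

```typescript
import * as tf from '@tensorflow/tfjs';

// Run a converted selection model on the contents of a canvas and return the
// predicted mask as a flat typed array.
async function predictMask(model: tf.GraphModel, source: HTMLCanvasElement) {
  const mask = tf.tidy(() => {
    const frame = tf.browser.fromPixels(source);                 // [height, width, 3] uint8
    const resized = tf.image.resizeBilinear(frame, [512, 512]);  // assumed model input size
    const input = resized.toFloat().div(255).expandDims(0);      // [1, 512, 512, 3], normalized
    return model.execute(input) as tf.Tensor;                    // e.g. [1, 512, 512, 1] mask
  });
  const data = await mask.data();                                // read the result back to JS
  mask.dispose();
  return data;
}
```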
With both cloud and on-device solutions now possible, Photoshop Web Beta can choose the optimal option for delivering the best user experience and deploy ML models accordingly. While on-device inference offers superior user interaction with low latency and privacy for frequently used tasks, not all ML models can run locally due to the limited memory per browser tab (currently around 4GB in Chrome). On the other hand, cloud inference can accommodate larger ML models for tasks where network latency may be acceptable, with the tradeoffs of less perceived privacy for the end user and the associated cost to host and execute such models on server-side hardware.
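A hedged sketch of the kind of routing decision described above might look like the following; the memory budget, the helper names, and the two inference paths are hypothetical placeholders rather than Photoshop's actual logic.

```typescript
// Stay comfortably below the roughly 4GB-per-tab limit mentioned above.
const ON_DEVICE_BUDGET_BYTES = 1 * 1024 * 1024 * 1024; // illustrative threshold only

// Hypothetical inference paths; both return a selection mask.
declare function runOnDevice(pixels: Uint8Array): Promise<Uint8Array>;
declare function runInCloud(pixels: Uint8Array): Promise<Uint8Array>;

async function runSelection(modelSizeBytes: number, pixels: Uint8Array, online: boolean): Promise<Uint8Array> {
  if (modelSizeBytes < ON_DEVICE_BUDGET_BYTES) {
    return runOnDevice(pixels);  // low latency, and the image data never leaves the browser
  }
  if (online) {
    return runInCloud(pixels);   // larger models, at the cost of a network round trip
  }
  throw new Error('Model is too large for on-device inference and no network is available');
}
```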
Performance Improvement
The Google team has improved TensorFlow.js hardware execution performance across its supported backends (WebGL, WASM, WebGPU), resulting in models seeing anywhere from 30% to 200% performance improvements (larger models tend to see the biggest gains), enabling close to real-time performance right in the browser.
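For readers who want to gauge similar differences on their own hardware, a rough benchmarking sketch is shown below; the input shape is a placeholder, warm-up matters because backends compile shaders or kernels on the first run, and absolute numbers will vary widely by device.

```typescript
import * as tf from '@tensorflow/tfjs';

// Time a single inference on the given backend ('webgl', 'wasm', or 'webgpu'
// where the corresponding backend package has been registered).
async function timeBackend(backend: string, model: tf.GraphModel): Promise<number> {
  await tf.setBackend(backend);
  await tf.ready();
  const input = tf.zeros([1, 512, 512, 3]);   // placeholder input shape

  // Warm-up run so one-time shader/kernel compilation is not counted.
  tf.dispose(await model.executeAsync(input));

  const start = performance.now();
  const out = (await model.executeAsync(input)) as tf.Tensor;
  await out.data();                           // ensure execution has finished before stopping the clock
  const elapsed = performance.now() - start;

  tf.dispose([input, out]);
  return elapsed;
}
```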
Looking Ahead
Photoshop Web Beta’s Select Subject and Object Selection tools demonstrate how machine learning can enhance user workflows and experience. As web-based machine learning technology continues to evolve and TensorFlow.js backend support and efficiency keep improving, Photoshop Web Beta will be able to bring more advanced models on device in the browser, pushing the limits of what is possible and enabling even more advanced features to delight users.
Try it out
Try out Photoshop Web Beta right now for yourself at https://photoshop.adobe.com and see the power of machine learning in the browser, bringing the best of Web ML (coming soon) and Cloud ML inference together in action!
Adobe offerings and trademarks belong to Adobe Inc and are not associated with Google.