
benchmark nodejs's C++ to Javascript data transfer

Postby MagicPoulp » 2019-03-22 09:07

Node.js (inside Electron) allows you to run plugins written in C++ and fetch the data from JavaScript.

If you have data in C++, it needs to be copied into new structures in JavaScript.
What would be the speed of that data transfer, in KB/s, on an average computer?
https://nodejs.org/api/addons.html

I mean: try to speculate about what the bottlenecks are, without making a real measurement.

Is it still true that a pure C++ application is much faster?
MagicPoulp
 
Posts: 197
Joined: 2018-11-05 21:30

Re: benchmark nodejs's C++ to Javascript data transfer

Postby neuraleskimo » 2019-03-22 16:45

To answer your last question first, yes. C++ (and a handful of other languages like C or Rust) will always be faster. BUT, does it matter for your application(s), or more importantly, for the performance-critical parts of your applications? For low-latency financial systems, scientific computing, etc., performance matters a lot. In those disciplines, entire applications will be written in C++ (or C, etc.). In other application areas, using Node.js (or Python) as the glue between "chunks" of C/C++ is perfectly fine.

On the general question(s), your use of "speculate" makes me wonder if this is a homework question (because I have asked such questions). I hope it is not. Anyway, C/C++ are my native languages (English is a second language) and Node.js is not a language I use; however, I skimmed the docs quickly...

Setting aside the time to resolve, find, load, etc. the modules, most CPU cycles will be consumed by marshaling data between Node.js and C++ (true for any language). The most expensive part of that will be marshaling strings (I saw conversions for UTFx). If Node.js uses UTF-8 internally, then the cost will be minimal. If Node.js uses UTF-16 and/or relies on libicu without an ASCII fast path, that conversion will cost a little more for ASCII data (also JSON). Also, copying string data is not particularly expensive, but allocating memory to receive the copy is (relatively speaking).

Floating-point and integer conversions will probably be cheap because Node.js will likely use 64-bit integers and double-precision types (because it is likely written in C/C++ itself). Copying those values will be super cheap because they will all (most likely) be passed on the stack.

Finally, passing arrays of values might incur some overhead if type conversion must be applied or data needs to be copied (and therefore memory allocated). Also, programmers in higher-level languages often use linked lists, etc., which are super expensive compared to arrays (e.g., std::vector). If you use a data structure that has to be flattened for you, that will be very expensive.

There are other things to consider, but this covers the big things from my perspective.

Hope this helps...
neuraleskimo
 
Posts: 102
Joined: 2019-03-12 23:26
Location: Bloomington, Indiana, USA

Re: benchmark nodejs's C++ to Javascript data transfer

Postby MagicPoulp » 2019-03-25 08:55

Interesting.

Not a homework question.

A more precise question: with 4 KB images coming via streaming at 120 Hz into a C++ add-on, will an Electron or Node.js application be able to display them at the 120 FPS frame rate?

JavaScript has garbage collection and no multithreading, especially in the C++ add-ons, for portability.

JavaScript can operate on the C++ structures directly, without copying.

I think it will require converting the image stream into the MPEG-4 video format, and then using an HTML video element.
https://www.w3schools.com/tags/tag_video.asp

Electron embeds a browser and communicates with its server via HTTP. There is a problem because an HTML video element requires a fixed URL. So one will have to stream data via HTTP, which means a 4/3 data inflation due to the Base64 encoding required by the x-ndjson streaming format, and also network overhead.

But overall, it should work. I wonder what FPS one would get. How would one calculate it?

Re: benchmark nodejs's C++ to Javascript data transfer

Postby neuraleskimo » 2019-03-25 17:02

Ah, I see where you are going and the video portion is getting outside my area of expertise, but I can be more specific...

As long as there is no copying of data, the function call from Node.js to C++ will be very low overhead. When importing your C++ module, Node.js will simply load the module (e.g., dlopen()) and grab a pointer to the function (e.g., dlsym()). When you call the function, Node.js should simply push any arguments on the stack (e.g., the address of the buffer to your image data). This will be only a few instructions. Node.js will then do an indirect jump. Again, a few instructions; however, here you have an opportunity for two cache misses. Those will be expensive, but rare if the "hot path" is short (i.e., small and stays in cache). Unfortunately, I can't answer that part for you: you will have to inspect the code or do some benchmarking. Without breaking out our x86 manuals, a good way to get a handle on the instructions involved is to use Compiler Explorer by Matt Godbolt (https://godbolt.org/). By the way, any talk from Matt Godbolt or Chandler Carruth is worth watching, but particularly watch their talks on compilers and performance. Also, if you don't already know, they are both active in the C++ community.

Garbage collection in Node.js will be a problem. Despite what Java programmers say, part of the advantage of C/C++ (and similar) is total control over garbage collection. Because you don't have control over that, Node.js will occasionally blow a huge hole in your data pipeline when the garbage collector runs. That will be completely unpredictable. However, it is possible to program in such a way (e.g., using lots of global buffers and variables) to minimize the amount of garbage that needs to be collected.

As for the rest of it, that will be tough. The amount of work required to transcode images is fairly substantial and depends greatly on a number of parameters. Also, the use of HTML, etc. makes the problem even harder. You are now talking about a huge chunk of code with lots of branches, lots of cache misses, etc.

Having said all of that, have you looked at Qt? If your app doesn't have to use electron (I happen to like the Atom text editor, but I wouldn't consider it high performance), then a C++/Qt solution might be worth a serious look.

Hope this helps...
Last edited by neuraleskimo on 2019-03-25 22:15, edited 1 time in total.

Re: benchmark nodejs's C++ to Javascript data transfer

Postby neuraleskimo » 2019-03-25 18:56

By the way, if you are moving data off disk or over the network (very likely), compress it with a fast method (e.g., zlib or gzip). I work with large sets of data and the increase in performance can be dramatic.

The longer story (or explanation) is that disk (including SSD) and networks are relatively slow. Trading CPU cycles and memory allocations to compress, move, then decompress data is almost always worth the effort. For example, a data set that takes, say, five seconds to move uncompressed, will take 500 ms to move when compressed 10:1. On the datasets I use, that compression takes very little time. As always though, benchmark to be certain.

Moral of the story... You mentioned MPEG4. That may or may not be the best choice; you will have to be the judge. However, if you have uncompressed images (e.g., raw camera images), you will most certainly want to compress them before moving (and decompress after).

Re: benchmark nodejs's C++ to Javascript data transfer

Postby MagicPoulp » 2019-03-26 08:57

Thank you for your comments.

It is evident to me, as someone who knows both C++ and JavaScript, that developing a user interface in JavaScript/HTML/CSS is much faster than with C++ and a library like Qt. Besides, Qt has limited possibilities, and Tab (and Shift+Tab) keyboard navigation does not work on certain elements. That is why I am interested in evaluating Electron.

I have a new solution. Forget the x-ndjson streaming format: it is for text in JSON format, and the transfer is inefficient. To maintain a persistent connection, a WebSocket would be better, but it is a complex technology and would require transferring images one by one on both ends.

Please understand that MPEG-4 allows a single HTTP request. Sending images one by one manually also has overhead, and it requires either a streaming text format that is inflated by 4/3 and cannot be encrypted (x-ndjson), or WebSockets, which are complex.

I have perfected my architecture.

The MPEG-4 transfer can use a normal HTTP request, and the MPEG-4 format includes very efficient compression. The browser has built-in C++ code to display videos.

So one complexity will be to build the MPEG-4 stream in real time and have the backend deliver it. All the other parts are straightforward, standard use of the HTML video element.

I think standard double buffering would work: fill buffer 1 while delivering buffer 2; once done, reverse, and fill buffer 2 while delivering buffer 1. A major drawback is that no multithreading is available in the default configuration, but it will work. One will just have to adjust the frame rate to what a normal computer can handle.

Skype uses UDP because the data transfer has fewer handshakes. HTTP is quite a heavy, TCP-based protocol, but I don't think it is a bottleneck on the network. Electron is portable; maybe Windows will have more network bottlenecks.

Talking about compression, I am not sure it helps when sending data to localhost. Compression/decompression has a CPU cost, and the network is very fast locally.

Skype uses Electron, so I guess they managed to make it work. They probably use UDP and WebSockets.

