I'm excited to introduce intern.work, a dedicated job board and learning hub created solely for internships.
While looking for interns to join my company, I realized there are very few places dedicated to internships alone, so I decided to create one myself. I also noticed that most general platforms emphasize big-ticket positions, such as "Remote Software Engineer", while overlooking a crucial stepping stone in many careers: internships.
Why intern.work?
1. By dedicating a platform exclusively for internships, I aim to ensure that both companies and interns find exactly what they're looking for without the noise of other job types.
2. Not just a job board, but (hopefully) a holistic hub that guides prospective interns on making the most of their internships, from preparing for interviews to maximizing their time in the position.
3. Community driven: a space for interns to share experiences, seek advice, and connect with potential mentors or peers in the industry (coming soon).
I would love to get feedback and feature suggestions.
Complex SVG favicons will usually be larger in terms of bytes than highly compressed PNG icons. We did an extensive study of favicons served on the top 500 websites [1].
If you want to stick with old-school PNG icons, we open-sourced `icopack`, our internal tool for efficiently packing individual PNGs into a highly optimised ICO file [2].
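The packing itself is straightforward once you know the ICO layout: a 6-byte header, a 16-byte directory entry per image, then the raw image blobs. Here's a minimal sketch of that general approach (the `pack_ico` function and its signature are my own illustration, not `icopack`'s actual API):

```python
# Sketch of packing pre-encoded PNGs into a single ICO file.
# ICO layout: ICONDIR header (6 bytes), one ICONDIRENTRY (16 bytes)
# per image, then the image data blobs back to back.
import struct

def pack_ico(images):
    """images: list of (size_px, png_bytes); a size of 256 is encoded as 0."""
    count = len(images)
    header = struct.pack("<HHH", 0, 1, count)   # reserved, type=1 (icon), count
    offset = 6 + 16 * count                     # image data starts after the directory
    entries, blobs = b"", b""
    for size_px, png in images:
        dim = 0 if size_px == 256 else size_px  # 0 means 256 in ICO entries
        # width, height, colors, reserved, planes, bit count, data size, data offset
        entries += struct.pack("<BBBBHHII", dim, dim, 0, 0, 1, 32, len(png), offset)
        blobs += png
        offset += len(png)
    return header + entries + blobs
```

In practice you'd pass in PNGs that are already squeezed through an optimizer, since the ICO container adds nothing beyond the 6 + 16N bytes of bookkeeping.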
> Complex SVG favicons will usually be larger in terms of bytes than highly compressed PNG icons.
The post you linked to doesn't include the word "SVG". Is there a separate article that compares using one SVG file vs. using multiple images?
It's hard to imagine that using a typical suite of PNG images at different sizes is going to be more efficient (and future-proof) than a single SVG file run through svgo.
> It's hard to imagine that using a typical suite of PNG images at different sizes is going to be more efficient (and future-proof) than a single SVG file run through svgo.
That's the wrong comparison though - each browser will typically only download one favicon size.
Earlier this year I created my own ICO editor for fun [1]. I learned a lot about reading and writing binary files, and decoding BMP data. While testing the editor on existing site favicons I kept finding that they were all uncompressed BMP data, and came to a similar conclusion to your article after checking the icons used on the Alexa top 100 sites.
I guess this is a combination of the ICO format being somewhat opaque (it's hard to tell if it's using BMP or PNG without using a hex editor), and that there aren't many applications available that create PNG-based ICO files in the first place (especially ones that are used in web development pipelines).
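The format doesn't have to stay opaque, though: each directory entry stores the offset of its image data, so you can peek at those bytes for the PNG signature. A small sketch of that check (my own illustration of the format, not any particular tool):

```python
# Tell whether each image inside a raw .ico blob is PNG- or BMP-encoded
# by seeking to its data offset and checking for the PNG magic bytes.
import struct

def ico_image_kinds(data):
    """Return 'png' or 'bmp' for each image embedded in raw ICO bytes."""
    _, _, count = struct.unpack_from("<HHH", data, 0)
    kinds = []
    for i in range(count):
        # Each 16-byte directory entry ends with a 4-byte size and 4-byte offset.
        size, offset = struct.unpack_from("<II", data, 6 + 16 * i + 8)
        magic = data[offset:offset + 8]
        kinds.append("png" if magic == b"\x89PNG\r\n\x1a\n" else "bmp")
    return kinds
```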
PNG support in .ico files was introduced in Windows Vista and I expect Internet Explorer just supports whatever the OS does. There might still be holdovers (both sites and tools) from when people cared about XP support.
This reminds me of an experiment [1] we ran a couple of months back. We crawled the top 100 Alexa websites and checked the bloat in the images served to billions of users.
It's a perceptually lossless optimization and recompression.
We use saliency detection (trained with an eye tracker), which tells us where the human visual system would look in an image, and optimise those fragments (heatmaps) using our own comparison metrics.
If you're interested in the details shoot me an email to przemek [at] optidash [dot] ai
A few days ago I launched Optidash - an ML-enhanced image optimization and processing API. Optidash builds on top of another product of mine (Pixaven) and runs entirely on Mac Pros (as pioneered by imgix).
While Optidash supports all major image formats for optimization and processing, I am mainly focused on JPEGs. All open-source JPEG optimizers share pretty much the same algorithm: create N copies of the master image at different quality settings and, using various metrics (SSIM/DSSIM/PSNR), pick the variant with the best quality-to-size ratio.
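That shared algorithm can be sketched in a few lines. Here `encode` and `similarity` are stand-in callables (in practice a JPEG encoder and SSIM/DSSIM/PSNR); the function and its parameters are my own illustration, not any particular optimizer's code:

```python
# Sketch of the common open-source approach: re-encode the master at several
# quality settings and keep the smallest variant whose similarity to the
# master stays above a threshold.
def pick_best_variant(master, encode, similarity,
                      qualities=range(95, 40, -5), min_score=0.9):
    best = None
    for q in qualities:
        candidate = encode(master, q)
        if similarity(master, candidate) < min_score:
            continue  # too lossy; reject this variant
        if best is None or len(candidate) < len(best[1]):
            best = (q, candidate)  # smallest acceptable variant so far
    return best  # (quality, bytes), or None if nothing met the threshold
```

The obvious cost is that every candidate quality requires a full encode plus a full-frame metric computation, which is exactly the work a learned quality predictor tries to avoid.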
Optidash takes a different approach. We use saliency detection to identify the most important area(s) of a master image. That basically tells us how the human eye would see the image and where it would most likely look. Once the saliency heatmap is computed, we crop that fragment and pass it to our Core ML model trained to predict optimal encoding settings. That approach also comes with a performance benefit: only the most salient areas are passed to the model (far less pixel data to process), and it ensures we don't saturate the pretty limited GPU memory we have available on Mac Pros (we use the 2nd gen, so D700s with 6GB VRAM).
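To make the "crop the salient fragment" step concrete, here is a toy illustration: threshold a saliency heatmap and take the bounding box of the pixels above it. A real heatmap comes from a trained model; here it's just a 2D list of floats, and the function is my own sketch, not Optidash's code:

```python
# Toy saliency crop: find the bounding box of heatmap values above a threshold.
def salient_bbox(heatmap, threshold=0.5):
    """Return (left, top, right, bottom) covering all values >= threshold."""
    coords = [(x, y) for y, row in enumerate(heatmap)
                     for x, v in enumerate(row) if v >= threshold]
    if not coords:
        return None  # nothing salient enough
    xs, ys = zip(*coords)
    # right/bottom are exclusive, matching typical crop-box conventions
    return (min(xs), min(ys), max(xs) + 1, max(ys) + 1)
```

The crop of that box is what gets handed to the quality-prediction model, so the amount of pixel data scales with the salient region rather than the full frame.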
Estimating the output Q value is one thing, but we are also training additional models to help us determine the optimal quantization table for a given salient region.
As I am still evaluating the above approach and general API design, I'd love to get some feedback.
Sure thing. Master images (user uploads) are deleted immediately after the processing is done (detection and cropping). Cropped faces are removed one hour after upload so that you have time to download them.
Yesterday I launched a simple, yet very effective tool for online face detection, cropping and filtering. The idea is very simple - upload as many images as you like and FaceMaze will give you back all the faces cropped from those input images. You can control padding, border radius and output image format.
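As an aside on the padding option: the usual way to implement it is to grow each detected face box by a fraction of its size and clamp the result to the image bounds. A small sketch of that idea (names and behaviour here are my guesses, not FaceMaze's actual code):

```python
# Grow a detected face box by a padding fraction, clamped to the image.
def pad_box(box, padding, img_w, img_h):
    """box = (left, top, right, bottom); padding is a fraction, e.g. 0.2."""
    left, top, right, bottom = box
    dx = int((right - left) * padding)   # horizontal margin to add on each side
    dy = int((bottom - top) * padding)   # vertical margin to add on each side
    return (max(0, left - dx), max(0, top - dy),
            min(img_w, right + dx), min(img_h, bottom + dy))
```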
FaceMaze rides on top of Pixaven's [1] infrastructure and, at its very core, uses Apple's Vision framework to do all the heavy lifting. The accuracy is not as flawless as, for example, Tencent's DSFD, which will detect even face reflections on flat surfaces, but it's good enough for everyday use.
I am not aware of any other free and unlimited web interfaces for face detection hence the title of this Show HN post.
Core Image by itself is worth investing in proper (Apple) hardware for. Now that we can write custom Metal kernels and plug them straight into Core Image, it's even more beneficial: we can come up with any pixel modifications and have them executed within the GPU context. Precompiled kernels, anyone? :)
The foundation of any image processing pipeline on macOS/iOS is Image I/O, which offers crazy-fast codecs for over a dozen image formats. Even though I had to write extra integrations for WebP and animated GIFs, it was really worth the effort. Native HEIC/HEIF support (reading and writing) is also neat.
Apple's Core ML is another piece of software I am using more and more at Pixaven. The ease of testing and deploying new ML models is just amazing (and yes, I am learning a lot along the way).
What baffles me is why NVIDIA doesn't offer a cross-platform Core Image counterpart. They have nvJPEG for decoding and built-in video codecs (so presumably HEIF should be doable too), but as far as I can tell there isn't much available if you want very high quality image manipulation on NVIDIA GPUs. I get that they're focused on 3D and deep learning now, but this is useful even for deep learning: I'd love to have hardware image decode and my entire image augmentation pipeline on the GPU, and only do I/O and perhaps some gnarly loss and metric computations on the CPU. Some of this is doable with NVIDIA DALI, but it doesn't seem to offer enough of a perf advantage to bother with so far.
What would be great is if they offered capabilities similar to Core Image (that is, quality-focused, flexible image processing) that I could use everywhere I can use CUDA.
I second that. High-performance image processing with NVIDIA means writing low-level CUDA, something I am not willing to invest my time in (at least for now). Translating all the code and custom kernels I wrote for Core Image would be quite a hassle, to put it mildly.
Browsing through this thread, the main argument I see is that they are most comfortable with the Mac APIs and think it would be harder to rewrite their stack on different technology. They would have to re-apply the new stack to millions of existing images and make sure the results match the previous version 100%, which is a problem you don't have when launching a new API.
So I'm not quite sure that I understand this reasoning in your case, as the operations performed (scaling, cropping, watermarking, flipping, filtering) are available in just about any image processing pipeline, and not really linked to anything that Quartz or Mac does particularly well.