Samsung today announced a new member of the Galaxy family: the Galaxy A7. The new handset comes with something we have never seen on a Galaxy phone before: a triple camera setup.
The Galaxy A7’s triple camera setup on the back lets you get much wider shots. Its new 8MP ultra-wide lens captures 120-degree shots, with Samsung saying “the Galaxy A7 captures the world exactly as you see it for unrestricted wide-angle photos.” Alongside the 8MP ultra-wide lens are a 24MP main lens and a 5MP depth lens, which together let you take photos with a bokeh effect, much like the iPhone’s Portrait Mode. Like other Galaxy phones, the A7 also lets you control the depth of field for these photos.
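Depth-assisted portrait effects like this generally work by blurring only the pixels whose depth-map value falls outside the chosen focal plane. Here is a minimal sketch in Python with NumPy; the function name, threshold, and kernel size are illustrative assumptions, not Samsung's actual pipeline:

```python
import numpy as np

def portrait_blur(image, depth, focus_depth, threshold=0.2, kernel=5):
    """Blur pixels whose depth differs from focus_depth by more than
    `threshold`, leaving the in-focus subject sharp.

    image: (H, W) grayscale array; depth: (H, W) values in [0, 1].
    A toy approximation of a depth-lens portrait mode, not a real pipeline.
    """
    h, w = image.shape
    pad = kernel // 2
    padded = np.pad(image, pad, mode="edge")

    # Naive box blur: average each pixel's kernel x kernel neighborhood.
    blurred = np.zeros((h, w), dtype=float)
    for dy in range(kernel):
        for dx in range(kernel):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= kernel * kernel

    # Background mask: pixels too far from the focal plane get blurred.
    mask = np.abs(depth - focus_depth) > threshold
    out = image.astype(float)
    out[mask] = blurred[mask]
    return out
```

Adjusting the depth of field after the fact, as the A7 allows, amounts to re-running this kind of compositing step with a different `focus_depth` or `threshold` while the original image and depth map are kept around.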
There’s also a 24MP camera for selfies on the front of the device, with adjustable LED flash and Selfie Focus for bokeh effect selfies.
Apart from that, the Galaxy A7 packs a 6-inch Super AMOLED display, 4GB of RAM, an octa-core processor clocked at 2.2GHz, and a fingerprint scanner for unlocking the phone. The Galaxy A7 comes in blue, black, gold, and pink, and it will be available in select European and Asian markets this fall. Samsung expects to expand availability in the “near future.” Pricing details are yet to be confirmed.
By the way: Samsung is expected to announce another Galaxy device in October, so make sure to keep an eye out for that.
<blockquote><em><a href="#325848">In reply to decals42:</a></em></blockquote><p>You may be more knowledgeable on the subject than I am, but the key question is whether the hardware can deliver all the data without real-time participation of the software (and of course, sufficiently sophisticated HW can do anything that SW can do). If it can, then post-"exposure" manipulation could do anything that could be done in real time. We also don't have access to Google's, Samsung's, or Apple's source code (or full HW specs), so we can't draw a conclusion about their system performance from an "internal" perspective.</p><p>We can only evaluate the quality of the result; without knowing all the implementation details, we can't judge its potential or lack of it.</p>
<blockquote><em><a href="#326113">In reply to MikeGalos:</a></em></blockquote><p>That's what I meant by "all the data". Any lossy compression scheme fails to deliver all the data. Of course, any function performed on the data, whether by digital or analog techniques, also has the potential to eliminate or distort the data set.</p><p>But my main point was that unless the intended image data capture requires real-time software intervention to acquire (e.g. changing the zoom level dynamically while capturing the image), it doesn't matter whether the processing takes place in real time or after all the data has been captured.</p><p>So "intense optimization between hardware and software" might not be necessary to achieve a particular effect.</p>
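The argument here can be demonstrated concretely: if the raw sensor data is stored losslessly, a deterministic per-frame transform gives the same result whether it runs as each frame arrives or later on the stored data; a lossy storage step breaks that equivalence. A small sketch in Python with NumPy (the `enhance` transform and the quantization step are illustrative assumptions):

```python
import numpy as np

def enhance(frame):
    # Any deterministic per-frame transform stands in for the
    # camera's processing (a toy tone curve, not a real pipeline).
    return frame * 0.5 + 10.0

rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(4, 4)).astype(float) for _ in range(3)]

# "Real-time": process each frame as it is captured.
realtime = [enhance(f) for f in frames]

# "Post-capture": store the raw frames losslessly, process later.
stored = [f.copy() for f in frames]
post = [enhance(f) for f in stored]

# The timing of the processing doesn't matter: results are identical.
assert all(np.array_equal(a, b) for a, b in zip(realtime, post))

# Lossy storage (here, crude quantization to multiples of 16)
# destroys data, so post-capture processing can no longer match.
lossy = [np.round(f / 16) * 16 for f in frames]
post_lossy = [enhance(f) for f in lossy]
assert not all(np.array_equal(a, b) for a, b in zip(realtime, post_lossy))
```

This is why the commenter's caveat about lossy compression matters: the equivalence of real-time and post-capture processing holds only while "all the data" is actually preserved.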
<p>Sure, but how many blades?</p>
<blockquote><em><a href="#326305">In reply to wright_is:</a></em></blockquote><p>And of course, just like a drawing program won't make one a good artist, a fancy camera won't make one a good photographer. IMO, the cameras on high-end smartphones are a "waste of machinery" for most people (including me).</p>