There is one comment from that thread I’d like to highlight
- They’re using some form of dynamic modeling, and likely also current sensing, that allows them to have a 20 mm peak-to-peak excursion in a 4″ driver. This is completely unheard of in the home market. You can read an introduction to the topic here. The practical upshot is that the 4″ driver can play louder than larger drivers, and with significantly less distortion. It’s also the kind of thing you typically find in speakers with five-figure price tags (the Beolab 90 does this, and I suspect the Kii Three does too). It’s a quantum leap over what a typical passive speaker does, and you don’t really find it even in higher-end powered speakers.
- The speaker uses six integrated beamforming microphones to probe the room dimensions, and alter its output so it sounds its best wherever it is placed in the room. It’ll know how large the room is, and where in the room it is placed.
- The room correction applied after probing its own position isn’t simplistic DSP of frequency response: the speaker has seven drivers that are used to create a beamforming speaker array, so it can direct specific sound in specific directions. The only other speakers that do this are the Beolab 90 and the Lexicon SL-1. The Beolab 90 is $85,000/pair, and no price has been set for the Lexicon, but the expectation in the industry is “astronomical”.
Lots of people online are calling it overpriced because they think Apple just slapped a bunch of speakers in a circular configuration and added Siri, but the engineering behind it is extremely audiophile niche stuff. And it does all this automatically, with no acoustical setup or technical know-how. And even if you are obsessive about your existing tuned audio setup, just think of how much better enthusiast stuff will become once this kind of technology becomes the accepted mainstream baseline for speakers.
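For readers curious what “beamforming” actually means here: HomePod’s DSP is of course proprietary, but the textbook version is delay-and-sum beamforming. Delay each driver’s signal so the wavefronts line up in the direction you want; they then reinforce there and partially cancel elsewhere. A minimal sketch, with illustrative driver count, spacing, and frequency (the real thing uses a circular array, not a line):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature

def steering_delays(n_drivers, spacing_m, angle_deg):
    """Per-driver delays (seconds) that steer a linear array toward angle_deg.

    Delaying driver n by n*d*sin(theta)/c makes all drivers' wavefronts
    arrive in phase along the target direction.
    """
    theta = math.radians(angle_deg)
    return [n * spacing_m * math.sin(theta) / SPEED_OF_SOUND
            for n in range(n_drivers)]

def array_gain(delays, spacing_m, look_deg, freq_hz):
    """Relative magnitude of the summed output in direction look_deg (0..1)."""
    theta = math.radians(look_deg)
    total = complex(0.0)
    for n, tau in enumerate(delays):
        # Phase = propagation phase toward look_deg minus the applied delay.
        phase = 2 * math.pi * freq_hz * (
            n * spacing_m * math.sin(theta) / SPEED_OF_SOUND - tau)
        total += complex(math.cos(phase), math.sin(phase))
    return abs(total) / len(delays)

delays = steering_delays(n_drivers=7, spacing_m=0.05, angle_deg=30)
print(array_gain(delays, 0.05, 30, 4000))   # on-target: 1.0
# Off-axis directions sum with mismatched phases, so the gain drops sharply.
```

The point of the sketch: with seven independently driven, independently delayed drivers, “where the sound goes” becomes a software decision rather than a fixed property of the cabinet.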
Archive for Shawn
- Basic identity information such as name, address and ID numbers
- Web data such as location, IP address, cookie data and RFID tags
- Health and genetic data
- Biometric data
- Racial or ethnic data
- Political opinions
- Sexual orientation
The GDPR requirements will force U.S. companies to change the way they process, store, and protect customers’ personal data. For example, companies will be allowed to store and process personal data only when the individual consents and for “no longer than is necessary for the purposes for which the personal data are processed.” Personal data must also be portable from one company to another, and companies must erase personal data upon request.
That last item is also known as the right to be forgotten. There are some exceptions. For example, GDPR does not supersede any legal requirement that an organization maintain certain data. This would include HIPAA health record requirements.
One potentially challenging requirement: companies must report data breaches to supervisory authorities, and to the individuals affected, within 72 hours of becoming aware of the breach. Another requirement, performing impact assessments, is intended to help mitigate the risk of breaches by identifying vulnerabilities and how to address them.
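Two of those requirements, purpose-limited retention (“no longer than is necessary”) and erasure on request, reduce to fairly simple storage rules. A toy Python sketch of the idea, not legal advice and nothing like a real compliance implementation (the class and field names are invented):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PersonalRecord:
    subject_id: str
    data: dict
    purpose: str            # the consented-to purpose for processing
    consented_at: datetime
    retention: timedelta    # "no longer than necessary" for that purpose

class DataStore:
    """Toy model of GDPR-style rules: consent-scoped retention plus
    erasure on request (the right to be forgotten)."""

    def __init__(self):
        self._records = {}

    def store(self, record: PersonalRecord):
        self._records[record.subject_id] = record

    def get(self, subject_id: str, now: datetime):
        rec = self._records.get(subject_id)
        if rec is None:
            return None
        # Expired data is treated as gone: kept no longer than necessary.
        if now > rec.consented_at + rec.retention:
            del self._records[subject_id]
            return None
        return rec.data

    def erase(self, subject_id: str):
        # Right to be forgotten: delete on the individual's request.
        self._records.pop(subject_id, None)
```

The interesting part for U.S. companies is that both deletions are mandatory behavior, not cleanup jobs that can be deferred indefinitely.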
For a more complete description of GDPR requirements, see “What are the GDPR requirements?”.
The F-Droid community has been working to provide only 100% verified Free Software, and to make apparent all forms of tracking, advertising, and “anti-features” commonly found in apps. F-Droid provides a complete app ecosystem where users are actively notified of tracking and advertising in the apps, and can make informed choices. We have achieved this through the work of many dedicated volunteers reviewing apps as they are submitted, and marking the things that they find.
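Mechanically, this is just per-app labels that users filter on. A toy Python sketch of the idea (the app names are invented; “Ads” and “Tracking” are examples of the anti-feature labels F-Droid’s reviewers apply):

```python
# Toy model of F-Droid-style anti-feature labelling: each app carries the
# anti-features reviewers flagged, and users filter on what they tolerate.
APPS = [
    {"name": "LibreReader", "anti_features": set()},
    {"name": "FreeMaps",    "anti_features": {"Tracking"}},
    {"name": "AdWeather",   "anti_features": {"Ads", "Tracking"}},
]

def acceptable(apps, tolerated):
    """Apps whose flagged anti-features all fall within the user's tolerance."""
    return [a["name"] for a in apps if a["anti_features"] <= tolerated]

print(acceptable(APPS, set()))          # -> ['LibreReader']
print(acceptable(APPS, {"Tracking"}))   # -> ['LibreReader', 'FreeMaps']
```

The hard part isn’t the filtering, it’s the volunteer review work that makes the labels trustworthy in the first place.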
By the design of social media, likely no one has noticed. The algorithm abhors a vacuum.
- Age of your Mac, iOS device and battery
- How often the battery was charged
- Your battery health (capacity in relation to the original capacity your battery had when it left the factory)
- and much more…
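The battery-health figure in that list is just one ratio. A quick sketch (the capacities below are illustrative, not any particular device’s real numbers):

```python
def battery_health_percent(current_capacity_mah, design_capacity_mah):
    """Battery health: current maximum capacity relative to the capacity
    the battery had when it left the factory (its design capacity)."""
    return 100.0 * current_capacity_mah / design_capacity_mah

# A hypothetical battery that has faded from 1821 mAh to 1560 mAh:
print(round(battery_health_percent(1560, 1821), 1))  # -> 85.7
```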
But where Shazam could really help Siri’s ears is with HomePod. Apple wants its new home speaker to “reinvent home music,” but if all it does is sound good, that’s hardly revolutionary. If Apple could leverage its Shazam acquisition to build some serious smarts into HomePod, it could be a difference maker. We will already be able to ask Siri to play things like the most popular song in 1986, but Shazam could amplify its knowledge considerably. It would be great to tap your AirPods and ask “Play the song that goes like this …” or “Play that Ed Sheeran song about Ireland.” Shazam might not be able to do that now, but the groundwork is certainly in place, particularly when paired with Apple’s own AI musical capabilities.
And it could go beyond simple song identification too. Apple could use Shazam to create personalized playlists right on HomePod, based on your listening habits and tastes. Apple Music already creates mixes that are pretty great, but Apple’s machine learning could use what it hears to create customized playlists for the time of day that only play in our homes. That alone could be a reason to spend $350 on a HomePod.
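For context on what Shazam actually brings: its published approach fingerprints audio by hashing pairs of spectrogram peaks, which survives background noise far better than raw audio comparison. A heavily simplified Python sketch of that idea, with the peak-extraction step skipped and the “songs” reduced to invented (time, frequency) landmark lists:

```python
from collections import Counter

def fingerprint(peaks):
    """Hash pairs of nearby spectral peaks, Shazam-style.

    `peaks` is a list of (time_index, frequency_bin) landmarks; each hash
    encodes an anchor frequency, a target frequency, and the time gap
    between them.
    """
    hashes = []
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1:i + 4]:           # pair with a few neighbours
            hashes.append(((f1, f2, t2 - t1), t1))  # hash + anchor time
    return hashes

def best_match(db, query_peaks):
    """Return the song sharing the most time-consistent hashes with the query."""
    query = fingerprint(query_peaks)
    scores = Counter()
    for song, song_hashes in db.items():
        index = {}
        for h, t in song_hashes:
            index.setdefault(h, []).append(t)
        offsets = Counter()
        for h, t in query:
            for t_song in index.get(h, []):
                offsets[t_song - t] += 1  # a consistent offset means a match
        if offsets:
            scores[song] = max(offsets.values())
    return scores.most_common(1)[0][0] if scores else None

song_a = [(0, 10), (1, 12), (2, 9), (3, 15), (4, 11), (5, 13)]
song_b = [(0, 20), (1, 22), (2, 19), (3, 25), (4, 21), (5, 23)]
db = {"song_a": fingerprint(song_a), "song_b": fingerprint(song_b)}

# A short "snippet" taken from the middle of song_a:
snippet = [(0, 9), (1, 15), (2, 11)]
print(best_match(db, snippet))  # -> song_a
```

The hum-a-tune and “the song that goes like this” scenarios would need a lot more than this, but the matching backbone is this kind of landmark hashing.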
I know a lot of people turn off haptic feedback on their smartphone. That is because, I have now learned, essentially every Android smartphone has absolutely awful haptics. Your $930 Galaxy Note8 has haptic feedback that is, frankly, bad. So does every other Android phone. Yes, the difference is that clear after going to the iPhone X.
Apple’s Taptic Engine doesn’t just buzz – it clicks, it taps, it knocks. And it can do so with an incredible range of intensities and precision. If I had to analogize, it’s sort of like having used crappy $10 earbuds your entire life and then someone hands you a set of $300 open-back Sennheisers. You didn’t know your music could sound that much better until your ears heard it for themselves. The same thing applies with the Taptic Engine: you won’t get it if you haven’t used it.
Linux has long dominated the TOP500 list, powering the majority of the machines that make it. At last count, back in June, 99.6% (or 498) of the top 500 fastest supercomputers ran Linux.
But as of November 2017 that figure stands at a full 100%: the 500 most powerful supercomputers in the world now use Linux.
The majority of these machines aren’t running your average off-the-torrent desktop distribution, but a bespoke, highly customised, and specialised version of Linux. But a minority do run something more familiar:
- 5 supercomputers run Ubuntu
- 20 supercomputers run some form of Red Hat Enterprise Linux (RHEL)
- 109 supercomputers run the Red Hat-affiliated CentOS
The world’s (current) fastest supercomputer is China’s Sunway TaihuLight, which is powered by a colossal 650,000+ CPUs. This beast of a machine, which runs a customised version of Linux called ‘Sunway RaiseOS’, has a processing speed of 93 petaflops — or the equivalent power of 2 million laptops working in unison.
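The figures quoted above hang together, for what it’s worth; a quick sanity check of the arithmetic:

```python
# 498 of 500 machines on the June list ran Linux:
linux_share = round(498 / 500 * 100, 1)
print(linux_share)  # -> 99.6

# 93 petaflops spread across 2 million laptops implies ~46.5 GFLOPS per
# laptop, a plausible figure for a mid-range laptop CPU.
per_laptop_gflops = round(93e15 / 2_000_000 / 1e9, 1)
print(per_laptop_gflops)  # -> 46.5
```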
I believe Face ID is slower at actual recognition than Touch ID, but it’s nearly impossible to notice due to the implementation. In the time it takes to move your finger to the Touch ID sensor, Face ID could have already unlocked your iPhone.
That’s the real Face ID revolution. Since you’re almost always looking at your phone while you’re using it, Face ID enables what I call “continuous authentication.”
In clause 5.1.2 (iii) of the developer guidelines, Apple writes:
Data gathered from the HomeKit API or from depth and/or facial mapping tools (e.g. ARKit, Camera APIs, or Photo APIs) may not be used for advertising or other use-based data mining, including by third parties.
It also forbids developers from using the iPhone X’s depth sensing module to try to create user profiles for the purpose of identifying and tracking anonymous users of the phone — writing in 5.1.2 (i):
You may not attempt, facilitate, or encourage others to identify anonymous users or reconstruct user profiles based on data collected from depth and/or facial mapping tools (e.g. ARKit, Camera APIs, or Photo APIs), or data that you say has been collected in an “anonymized,” “aggregated,” or otherwise non-identifiable way.
Another clause (2.5.13) in the policy requires developers not to use the TrueDepth camera system’s facial mapping capabilities for account authentication purposes. Rather, developers are required to stick to the dedicated API Apple provides for interfacing with Face ID (and/or other iOS authentication mechanisms). So basically, devs can’t use the iPhone X’s sensor hardware to try and build their own version of ‘Face ID’ and deploy it on the iPhone X (as you’d expect).
They’re also barred from letting kids younger than 13 authenticate using facial recognition.
Apps using facial recognition for account authentication must use LocalAuthentication (and not ARKit or other facial recognition technology), and must use an alternate authentication method for users under 13 years old.
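That clause boils down to a two-branch decision: face authentication only via the system API, and always an alternate method for under-13 users. A toy sketch of that policy logic in Python (a real app would of course call Apple’s LocalAuthentication framework from Swift; the function and return values here are illustrative):

```python
def pick_auth_method(age, device_supports_face_id):
    """Choose an authentication path under Apple's clause 2.5.13 (sketch).

    Facial recognition may only happen via the system API (modelled here as
    "system_face_id"), and users under 13 must get an alternate method.
    """
    if age < 13 or not device_supports_face_id:
        return "alternate_method"   # e.g. a passcode flow
    return "system_face_id"         # LocalAuthentication, never a custom model

print(pick_auth_method(30, True))   # -> system_face_id
print(pick_auth_method(12, True))   # -> alternate_method
```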