Which is a complete non-issue. It’s $99 / year, basically a symbolic amount just high enough to prevent spammers from making a billion accounts.
I have no problems with this. Notarizing your app is trivial and takes just a few minutes. As a user I want to know who actually produced an app and ensure it wasn’t tampered with.
Don’t let Apple tell you they invented it.
Why always the knee-jerk anti-apple reaction even if they do something good?
FYI: Apple isn’t telling anyone they invented this. In fact, they didn’t even tell anyone about this feature and declined to comment after it was discovered and people started asking questions.
Yep. The best people will leave first because they have options. It’s called the Dead Sea effect.
How is that even legal? How is someone who hasn’t examined the patient and isn’t their physician allowed to make treatment decisions, assuming they even have the necessary qualifications?
I mean, you have to explicitly give permission before apps can access the camera.
How can an app turn on the camera without your consent?
And yet, I’ve never run into RAM problems on iPhones, either as a user or as a developer. On iOS an app can use almost all of the RAM if needed, as long as it’s running in the foreground. Android, by contrast, is much stingier with RAM, especially for Java/Kotlin apps: there are hard limits on how much RAM an app can actually use, and it’s a small fraction of the total amount. The actual limit is set by the manufacturer and differs per device; Android itself only guarantees a minimum of 16MB per app.
The reason is probably that Android is much more lenient about letting things run in the background, so it needs to limit per-app memory usage.
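If you’re curious what that limit is on a particular device, Android exposes it through ActivityManager. A minimal sketch (assuming you have any valid Context, e.g. an Activity):

    import android.app.ActivityManager
    import android.content.Context

    // Sketch: log the per-app heap limits Android advertises on this device.
    // `context` can be any valid Context, e.g. an Activity.
    fun logHeapLimits(context: Context) {
        val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
        // Standard heap limit (in MB) for a normal app on this device.
        val normalMb = am.memoryClass
        // Larger limit (in MB) granted when android:largeHeap="true" is set in the manifest.
        val largeMb = am.largeMemoryClass
        println("Per-app heap limit: ${normalMb}MB (with largeHeap: ${largeMb}MB)")
    }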
Those apps also use more RAM than an equivalent iOS app, simply because they run on a garbage-collected runtime. With a GC there is a trade-off between performance and memory usage. A GC always wastes some memory, because memory isn’t freed immediately once it’s no longer in use; it’s only freed when the GC runs. If you run the GC very often you waste little RAM at the cost of performance (all the CPU cycles the GC burns); if you run it at large intervals you waste a lot of RAM (because you let a lot of ‘garbage’ accumulate before cleaning it up). In general, to match the performance of non-GC’d code you need to tune it to use roughly four times as much RAM. The actual overhead depends on how Google tuned the GC in ART combined with the behavior of specific apps.
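You can see that trade-off on any JVM-style runtime. A toy illustration (desktop JVM, purely illustrative, not how ART actually tunes its collector; exact numbers depend on heap size and GC settings):

    // Toy illustration: garbage piles up on the heap until a collection runs.
    fun main() {
        val rt = Runtime.getRuntime()
        fun usedMb() = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024)

        println("Used before allocating: ${usedMb()} MB")

        // Allocate short-lived arrays and drop them immediately.
        // They become garbage right away, but the memory only comes back when the GC runs.
        repeat(200_000) {
            ByteArray(1024) // ~1 KB that is dead as soon as it's created
        }
        println("Used after allocating (dead objects may still be resident): ${usedMb()} MB")

        System.gc() // hint the collector to run now
        println("Used after GC: ${usedMb()} MB")
    }

How much dead memory sits around between collections is exactly the knob described above: collect often and you pay CPU, collect rarely and you pay RAM.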
Note that this only applies to apps running in ART; many system components, like the web browser, are written in C++ and don’t suffer from this inefficiency. But it does mean Android uses more RAM than iOS while at the same time giving apps less RAM to actually use.
It basically comes down to different architectural choices made by Google and Apple.
It’s not hard to target the older models, with iOS it’s mostly just a few small tweaks.
It depends what you are doing. Targeting the iPhone 7’s GPU can be quite a PITA.
Upgrade your dinosaur of a phone.
Doesn’t matter either way because everyone uses WhatsApp anyway.
RCS will never be able to compete with either because it’s a GSMA standard. Apple or Meta can think of a cool new feature, add it to their client and roll it out to all their users with the next update.
If they want to add a new feature to RCS, the GSMA (an organization with over 1,500 members) first has to form a committee, which can then argue about its members’ conflicting interests for a few years before writing down a new version of the standard; after that, dozens of clients and servers at hundreds of different operators need to be upgraded before everyone can use the new feature. Due to this bullshit RCS will never be able to keep up.
Not entirely true… American Android users care about it.
Then I guess it’s nice for both of them that iOS will support RCS.
Literally no one cares about RCS.
Or just don’t buy LCD and get an OLED. All LCDs look terrible anyway. The technology is fundamentally unsuitable for making televisions.
I have the 2018 Pro. The problem with the lack of bitstreaming is that no matter what you adjust, you won’t get 3D audio (i.e. height speakers) out of the LPCM streams. You can do DD+Atmos on streaming services, but you miss out on TrueHD+Atmos and DTS:X on Blu-ray backups. That’s not something I want to give up.
If only Apple added support for bitstreaming I’d replace my Shield with an AppleTV in a heartbeat.
Don’t get me wrong, the Shield plays everything you throw at it. The hardware is great, but the software is so janky. It’s often slow to respond to input, and it needs a reboot every couple of days because it just gets more laggy and choppy over time. Sometimes it just forgets my TV supports Dolby Vision until I reboot it. Support from Nvidia seems to have dried up as well.
The whole point of 2FA is to keep the second factor separate from the first. If you store both in the same password manager app, anyone who compromises that one vault gets both factors, which defeats the entire point of 2FA.
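For context on why that separation matters: a TOTP code (RFC 6238) is derived entirely from a shared secret plus the current time, so anything that holds the secret can mint valid codes. A rough sketch (Base32 decoding of the secret omitted; the 30-second step and 6 digits are the usual defaults):

    import javax.crypto.Mac
    import javax.crypto.spec.SecretKeySpec

    // Rough TOTP sketch (RFC 6238): whoever holds `secret` can generate valid codes.
    // Keep it in the same vault as the password and one breach yields both factors.
    fun totp(secret: ByteArray, timeMillis: Long = System.currentTimeMillis(),
             stepSeconds: Long = 30, digits: Int = 6): String {
        val counter = timeMillis / 1000 / stepSeconds
        // 8-byte big-endian counter, as required by HOTP (RFC 4226).
        val msg = ByteArray(8) { i -> ((counter shr (56 - 8 * i)) and 0xFFL).toByte() }
        val mac = Mac.getInstance("HmacSHA1")
        mac.init(SecretKeySpec(secret, "HmacSHA1"))
        val hash = mac.doFinal(msg)
        // Dynamic truncation: pick 4 bytes based on the low nibble of the last byte.
        val offset = hash.last().toInt() and 0x0F
        val binary = ((hash[offset].toInt() and 0x7F) shl 24) or
                ((hash[offset + 1].toInt() and 0xFF) shl 16) or
                ((hash[offset + 2].toInt() and 0xFF) shl 8) or
                (hash[offset + 3].toInt() and 0xFF)
        var modulus = 1
        repeat(digits) { modulus *= 10 }
        return (binary % modulus).toString().padStart(digits, '0')
    }

The point being: the stored TOTP secret is functionally equivalent to the rolling codes themselves.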
I never claimed it was evidence of how it currently works, only that it gives some insight into how Reddit was designed. I would be very surprised if they changed this aspect of the design. It makes sense to not delete comments or edits for reasons I mentioned before. Unfortunately we won’t know for sure unless Reddit confirms it.
Reddit used to be open source, and there is still a copy of that source available on GitHub. It’s 7 years old, so it’s probably significantly different from what they’re running now. Still, it gives some insight into the design.
For example, deleted comments aren’t actually deleted; deleting just sets a deleted flag. There’s example code in that repo that shows this.
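The pattern being described is a plain soft delete. A hypothetical sketch of the idea (not Reddit’s actual code, which is Python):

    // Hypothetical sketch of a soft delete: the record is never removed,
    // a flag is set and the hiding happens at render time.
    data class Comment(
        val id: Long,
        val author: String,
        val body: String,
        val deleted: Boolean = false,
    )

    // "Deleting" just flips the flag; the original text is still stored.
    fun markDeleted(comment: Comment): Comment = comment.copy(deleted = true)

    // Readers only ever see the rendered form, so the comment looks gone.
    fun render(comment: Comment): String =
        if (comment.deleted) "[deleted]" else "${comment.author}: ${comment.body}"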
I haven’t dug around the code enough to figure out how editing works; it’s Python code, so an unreadable mess. The database design also seems very strange. It’s like they built a database system on top of a database.
It’s $99 a year. I wish my hobbies were that cheap.