A couple of related issues that cause me frequent problems...
- In addition to detecting when monitors are removed, detecting when they are there from the start. If I have a monitor plugged in at boot, it is generally not detected. I instead have to wait for the login screen to appear before connecting the external monitor.
- When opening a new window, there seems to be little logic as to which monitor it will appear on, and where. This is particularly annoying in combination with being limited to a single DPI setting across multiple monitors despite each needing different scaling (suddenly, unexpectedly huge or tiny windows) - see https://hackernews.hn/item?id=14003160
Development Initiatives | Python Developer | Bristol UK | ONSITE http://devinit.org/
Development Initiatives is looking to recruit a talented Python developer to work for the International Aid Transparency Initiative. If you want a job that uses your tech skills to make the world a better place – this is something for you.
We’re looking for someone who is passionate about supporting the potential of open data to provide a platform for greater accountability and citizen empowerment in the world of aid and development.
The role is an exciting opportunity to make a real difference to the transparency and accountability of international aid and development, and to play an important role in a leading international open data initiative. In addition to your tech skills, your contributions to team discussions will help shape the future of this prestigious open data standard.
Out of several dozen different uses of laser pointers, I've only ever been able to (easily) see perhaps 3 or 4 of them, plus a few more if I focus entirely on the game of spot-the-laser-pointer. It's related to my colourblindness.
As such, verbal cues (or a long stick) are preferable to a laser pointer. Of course, use a laser pointer to assist those who can see it, but it's better to assume half the people in the room can't and to describe the location in words as well.
I may be an odd one, but I use email for personal things. If messaging a group of people, there's inevitably someone who refuses to get Facebook, or doesn't have a smartphone, or never checks their texts. At the same time, everybody has an email address that's checked at least daily (at least at university, where email is how anything course-related is communicated).
Emailing removes the cognitive step of working out how to contact someone - it just works. Yes, I'll use text or Facebook for certain things with certain people, but email works with everyone.
University is a bit of an odd case - it's often a mix between work and private life, and during that time I indeed used email quite a bit more for private communication.
There are a few other things that could also impact results. Looking at the experimental design, there could be a good 25% uncertainty in the results because the measurements are taken at such a high level.
Rough upper-bound values for the various sources I can think of / see:
- 10% because they're using an OS rather than running binaries straight.
- 15% because GCC is odd and -O3 does even stranger things, particularly when it comes to energy.
- 15% because their benchmarks are large workloads rather than microbenchmarks; microbenchmarks could better target the architecture instead of being huge lumps (this would also exacerbate the GCC strangeness).
- 17% because they're measuring board power rather than the CPU power supply (the claim that SoC-based ARM development boards cannot have processor power isolated is questionable - I've seen BeagleBoards with the CPU power supply isolated)
- 10% because they're measuring energy consumption at a low resolution (their equipment samples at a few Hz when there's kit that happily samples at kHz or MHz rates).
Of course, some of these will cancel, and others will be nowhere near as bad as stated. None of this introduces order-of-magnitude changes to the conclusions, although a few of the 'Key Findings' may warrant questioning.
You can have an impact on power consumption at a range of levels: from choosing appropriate algorithms, to switching to different data types, to changing compiler flags. Add together a few of these 'easy' 5-10% energy savings and you've just reduced your application's energy consumption by a third - four independent 10% savings compound to 1 - 0.9^4 ≈ 34% (OK, it's not quite that simple since the savings overlap, but the principle stands).
I currently have about 3 weeks' experience with a few different formal verification tools (others will know better), but...
1. While large, the system will have a finite number of desired actions (Read, Write, Send Data, etc), with set paths/transitions between them (think state machines). With each of these things-you-want-to-happen, there may or may not be corresponding errors. If there are error handling actions, these become further things-you-want-to-happen, and so on. This modelling can be at a fairly high level, so you only need to say 'there may be an error, this is how we may handle it' rather than iterating over every possible error.
2. Each part of a system may be independently modelled, with synchronisation states (waiting for data from another system, etc). A model checker can then generate traces to cover the entire possible set of traces. This means that verification may occur at multiple simultaneous levels - you could model a File System separately to a Network Driver, each of which is separate to an Application making use of these two things.
3. Full formal verification is NP-hard. Model checkers, however, are clever tools and are able to act more intelligently than brute forcing. They can do things like looking at which parts of the state space haven't been covered as thoroughly and covering them better; doing things akin to MC/DC testing; focusing on rare events (a good checker will allow you to customise the types of traces you're interested in). Checkers are also able to output what they have covered along with probabilities of finding certain types of bug. Standard distributions and the Pareto Principle will often hold in terms of guesstimating how long varying levels of confidence will take.
Another thing to note is that hardware verification is light years ahead of software verification - there are algorithms that have been used on hardware for decades that are only just being discovered for software verification. You could make a comparison that says a distributed software system is like a modern CPU with various independent parts, and if the CPU can be verified then surely your distributed software system can too.
Some work I was doing over the summer indicates that the optimal datatype depends on the algorithm you're using.
qsort() can get 15% better performance with uint64_ts than with uint32_ts (sorting identical arrays of 8-bit numbers, just represented at different widths).
On the other hand, a naive bubble sort implementation managed 5% better performance with 16-bit datatypes than with any of the others.
If you start measuring energy usage as well, it becomes even stranger - using a different datatype can make it run 5% faster, while using 10%+ less energy (or in the qsort() case, take less time, but use more energy).
qsort strikes me as a bit of a crappy benchmark because of the type erasure. Your compiler likely isn't inlining the comparator or applying vectorisation. Vectorisation would likely benefit 32-bit floats on a 64-bit platform more than doubles.
I'd tested a number of sort algorithms (bubble, insertion, quick, merge, counting), so also testing the sort function from stdlib seemed like a logical continuation.
It was done rather quickly, so there wasn't time to properly investigate, merely look at the numbers and go "that's strange".
I'm not sure. This was a fairly quick experiment on an Ivy Bridge processor where the actual results were of secondary concern at the time (it was proof-of-concepting a separate tool).
If you look on arXiv.org, there are publications looking more at alignment on Cortex and XCore architectures. There's probably work relating to x86, though the longer instruction pipeline makes it more complicated to reason about than the simpler architectures.
Depending on how the developer expects future development to pan out, requesting access to all websites can be the only reasonable approach.
With how the update system works[1], when an update requests new permissions, the extension is disabled until manually re-enabled. If there's even a slight possibility you may want access to additional sites in the future, you basically have to request "all websites" up front to prevent this from happening.
Since Chrome 16, optional permissions have been available (so you only request permissions when they're needed, preventing the extension from auto-disabling), although that introduces additional overhead beyond simply requesting everything in the manifest file.
The extension update system should probably work more along the lines of that in Android - it auto-updates if the new version needs no additional permissions, but requires user input when new permissions are required. The current[1] state of auto-disable-on-update isn't ideal from either a developer or user position.
[1] as of about a year ago when I last tested this
You are right that adding new permissions will disable the extension until it is re-enabled. But Android has very similar behaviour.
Still, I feel that using optional permissions and pointing out to the user why they would want to enable the new permissions is the best option. Yes, it's more work for the developer.