Discussion on on-premises presence: detection, usage, UIs

Here I am sharing my end-user findings and ideas on presence detection, notifications and UI, hoping that this less technical approach opens discussion and helps take home automation, and OH in particular, toward what it should provide to end users: simplicity, comfort, “intelligence” and universal usage, all while hiding its technical complexity and advanced architecture.

I have worked on intra-building and on-premises presence detection and would like to share my implementation for discussion and brainstorming. Arrival/departure detection works pretty well. Intra-building detection is still unsatisfactory.

On premises
User location items have the following states: home, arrival, departure, away. I tried ping-based detection, Owntracks, Tado location reporting and Synology Videostation geofencing, and none gave me acceptable reactivity. Finally I use Tasker with three geofences plus WiFi detection to update the states. I kept ping tracking to determine who is home based on phone IP.
On the Tasker side, the frequency of geolocation/WiFi detection is automatically adjusted depending on the location, to improve precision and speed of detection.
A guest user item allows handling unknown users: some command systems, such as audio/video, stay disabled, the alarm and cameras are deactivated, but light switches remain enabled.

  • Wifi connected = home
  • Wifi disconnected + leaving the near geofence (100 m) = departure
  • Wifi disconnected + leaving the far geofence (500 m), or after 5 min = away
  • Wifi disconnected + entering the medium geofence (200 m) = arriving. If no garage door command is received within 3 minutes, or the geofence is left again, the state goes back to away (for example when driving nearby but not returning home)
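The transition rules above can be sketched as a small state machine. This is a simplified illustration in Python, not the actual Tasker/openHAB implementation; the state and event names are made up for the example:

```python
# Presence state machine sketch for one user, following the WiFi/geofence
# rules described above. States: "home", "departure", "arriving", "away".
# Event names are illustrative, not real Tasker events.

TRANSITIONS = {
    # WiFi connection always means the user is home
    ("away", "wifi_connected"): "home",
    ("arriving", "wifi_connected"): "home",
    ("departure", "wifi_connected"): "home",
    # WiFi lost and near geofence (100 m) left -> departure
    ("home", "left_near_geofence"): "departure",
    # Far geofence (500 m) left, or 5-minute timeout -> away
    ("departure", "left_far_geofence"): "away",
    ("departure", "timeout_5min"): "away",
    # Medium geofence (200 m) entered -> arriving
    ("away", "entered_medium_geofence"): "arriving",
    # No garage door command within 3 minutes, or geofence left -> back to away
    ("arriving", "timeout_no_door_3min"): "away",
    ("arriving", "left_medium_geofence"): "away",
}

def next_state(state, event):
    """Return the new presence state; unknown events keep the current state."""
    return TRANSITIONS.get((state, event), state)
```

With this table, driving past the house maps to away → arriving → away without ever reaching home, which matches the garage-door timeout rule.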

Door commands are triggered from a simple Tasker UI on the phone, which also sends the name of the user about to arrive. In OH, a User_arriving_perdoor item then helps OH welcome the user properly depending on which gate they arrive through (say “Welcome back, Yann”, announce user-related events, or more).

I use all available events that report a user activity.

  • Motion detection (Z-Wave movement sensors), but a 30-second reporting delay makes location unreliable
  • Door contacts
  • Wall switches: when a Z-Wave light changes state without a prior command, it means the command was triggered from a wall switch. I implemented a switch “proxy layer” so I can see whether a light state change is related to a proxy command. An advantage of the layer is that you can “lock” switches by cancelling a command (sending an OFF after an unattended ON is received)
  • Speech: the Echo binding reports any speech and helps with location. My concern is that it is not always the closest Echo that responds to a request, so the location can be incorrect
  • IR: IR receivers help locate users
  • Cameras : presence detection reported by Synology Surveillance Station through HTTP
    All these items belong to a room group. Each room has a LastPresence_Room item with an expiration delay.
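The per-room last-presence tracking with an expiration delay can be sketched roughly like this. It is an illustrative Python sketch of the idea behind the LastPresence_Room items, not the actual OH items or rules; the names are assumptions:

```python
import time

class RoomPresence:
    """Last-presence tracking per room with an expiration delay, mirroring
    the LastPresence_Room items described above (illustrative sketch)."""

    def __init__(self, expiry_seconds=300):
        self.expiry = expiry_seconds
        self.last_seen = {}  # room name -> timestamp of last sensor event

    def sensor_event(self, room, now=None):
        """Record a motion/door/speech/... event in a room."""
        self.last_seen[room] = time.time() if now is None else now

    def occupied_rooms(self, now=None):
        """Rooms whose last event is younger than the expiration delay."""
        now = time.time() if now is None else now
        return sorted(r for r, t in self.last_seen.items()
                      if now - t < self.expiry)
```

Any of the sensor types listed above would feed `sensor_event`; a room drops out of `occupied_rooms` once its delay expires.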

The purpose of locating people at home is mainly to let the notification sub-system determine the best notification method (speech, color light, TV OSD notification, push to device, immediate or delayed) depending on the “notification channels” available in the room, on the addressees of the message (all persons at home, all persons not at home, a burglar :cowboy_hat_face:, a specific user), and on ongoing activities (watching TV), the criticality of the message and user notification preferences.
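The channel-selection logic could be sketched as a simple priority function. The names and the priority order below are hypothetical, just to illustrate the idea; the real rules also weigh user preferences and delayed delivery:

```python
def pick_channel(addressee_home, room_channels, watching_tv, critical):
    """Pick a notification method from the channels available in the
    addressee's room. Priorities here are illustrative, not the author's."""
    if not addressee_home:
        return "push"                      # only option when away
    if watching_tv and "tv_osd" in room_channels:
        return "tv_osd"                    # don't interrupt TV with speech
    if critical and "speech" in room_channels:
        return "speech"                    # critical messages are spoken
    if "color_light" in room_channels:
        return "color_light"               # unobtrusive default
    return "push"                          # fallback
```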

This works pretty well when one person is at home, but location is of course unreliable when two or more people are moving between rooms. This might be improved by some algorithmic logic using typical times to move between rooms, or the order of sensor triggering (which should give the direction of movement).
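The sensor-ordering idea can be illustrated in a few lines: the chronological order of room triggers yields the direction of movement. This is a sketch, assuming timestamped (time, room) events:

```python
def movement_direction(events):
    """Given timestamped (time, room) sensor events, return (from_room,
    to_room) for the latest transition, or None if there is none.
    Repeated triggers in the same room are collapsed. Illustrative sketch."""
    rooms = [room for _, room in sorted(events)]
    path = []
    for room in rooms:
        if not path or path[-1] != room:
            path.append(room)
    if len(path) < 2:
        return None
    return path[-2], path[-1]
```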

I read this tutorial with attention and had not thought about CO2 detection. I am sure we can find more clever tricks to improve this.

On top of this, I implemented a very simple “Alexa, follow me” rule which makes Alexa speak when you enter a room and switches lights accordingly. It is for show-off, but one can imagine triggering events based on users' habits and preferences with this tracking.

Running dynamic and lean HABPanel UIs (just starting with HABPanel…) that display relevant commands depending on the user, the time of day or the context would be nice. I am quite circumspect about home automation UIs that often look like a Boeing 747 cockpit or the Chernobyl power plant control room… Sure, they are beautiful and a technical show-off for developers, but they are a nightmare for partners and children who are supposed to be the end users. Home automation is not SCADA. Same architecture maybe, but different purposes.
The UI challenge is to find the “best” interaction channel: speech, hard keys (long life to TV remotes), touchscreen. Each has its strengths and weaknesses depending on the situation.

One major point of tracking is to provide cutting-edge automated comfort features while ensuring individual privacy at the same time. On this second point, I think home automation systems currently lack individual privacy management. My partner and I are really touchy about this: for example, there is no way one should be able to activate the cameras at home to spy on their partner or locate them during the day. I believe these are interesting challenges to address.


Nicely written…

Just a few questions:

Is this really fast enough?
When the lights are out in a room (and it's dark), turning them on is normally the first thing to do when entering. The problem is that the light switch is often very close to the door; the person could actually still be halfway outside the room when turning on the light.
The next problem is that if the person has to be inside the room for the system to be sure they actually are, this will often mean stepping into the room in darkness and waiting a few seconds for the light to turn on by presence detection.

This is purely a matter of timing. But it may also need some sort of prediction, if the neighbouring rooms can detect this specific person as well. I.e.:
A person goes through 1–3 rooms before entering the room they actually want. Keeping track of this person, one could assume exactly where they are supposed to go… and most often this will work. But… what if the person stops halfway, turns around and goes somewhere else (or back to where they came from)? This is where it becomes very difficult, and why it's almost impossible to predict where a human is going :smiley:

Totally agree.
My UI does not even have an opportunity to turn anything on/off. No switches, sliders etc… It's not even a touch screen… Instead I make use of voice commands, or the hard-wired “normal” switches. My UI is supposed to be a monitoring floorplan showing the whole house and its status in one screen only (whenever I get it finished).

Leave out the indoor cams. Concentrate on the outdoor ones. Maybe add one or two indoor cams with face detection: one at the front door, and maybe one in the hall, to help with presence detection. (That's my plan.)

Alternatively, put the cameras behind openHAB (in some form), and only enable viewing through rules when the owners are not home.

OH takes care of activating/deactivating cameras in Surveillance Station based on user presence, and also notifies with TTS in the relevant rooms if a camera is manually activated when it is supposed to be deactivated.

You are right: it is not fast enough for the light to be on before you enter the room. But as I wrote, this is just for show-off. Anyway, it is not very difficult to add some predictive routine (unless your house has hundreds of rooms) and determine, based on the current location and the chronology of sensor detections, which room you are heading to. This can be handled in a hashmap storing (currentroom, destinationroom, probability).
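Such a predictive routine could look roughly like this: count the observed (current room, next room) transitions and return the most likely destination with its probability. This is a sketch of the hashmap idea, not actual OH code:

```python
from collections import defaultdict

class RoomPredictor:
    """Stores (current_room, destination_room) transition counts and
    predicts the most likely destination, as suggested above (sketch)."""

    def __init__(self):
        self.counts = defaultdict(int)

    def observe(self, current_room, next_room):
        """Record one observed transition between two rooms."""
        self.counts[(current_room, next_room)] += 1

    def predict(self, current_room):
        """Return (most likely destination, probability), or None."""
        candidates = {dst: n for (src, dst), n in self.counts.items()
                      if src == current_room}
        if not candidates:
            return None
        total = sum(candidates.values())
        best = max(candidates, key=candidates.get)
        return best, candidates[best] / total
```

Feeding it the room transitions detected by the sensors would let a rule pre-light the most probable destination, while the probability gives a threshold to avoid acting on weak guesses.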
