
1. Fetch your data


Setting

In this setting, people move around and interact in a public space, and we want to recognise complex, long-term activities, such as people walking together, fighting or meeting.

Data

The data on which we build our recognition is a set of detected elementary (short-term) behaviours, such as a person running, moving abruptly or standing still.

The data come from the benchmark CAVIAR dataset.

Example

For our scenario we gather data about two persons (George and Alex) at specific moments in time, indicated by numeric timestamps. For each timestamp we record each person's current activity and position in space. Below we describe the gathered data both in everyday language and in the corresponding formal representation.


Plain Representation
Formal Representation
For the first timestamp (400) George's action is an abrupt motion
happensAt(abrupt(George), 400)
and we have located him at position (262, 285) of the camera picture co-ordinates.
holdsAt(coord(George)=(262, 285), 400)
For the same timestamp (400) Alex's action is also an abrupt motion
happensAt(abrupt(Alex), 400)
and we have located him at position (260, 288) of the camera picture co-ordinates.
holdsAt(coord(Alex)=(260, 288), 400)
At the next timestamp (440) George keeps moving abruptly
happensAt(abrupt(George), 440)
and we have located him at position (262, 286) of the camera picture co-ordinates.
holdsAt(coord(George)=(262, 286), 440)
At the same timestamp (440) Alex's action changes to an active motion
happensAt(active(Alex), 440)
and we have located him at position (262, 285) of the camera picture co-ordinates.
holdsAt(coord(Alex)=(262, 285), 440)
At the next timestamp (480) George's action changes to an active motion
happensAt(active(George), 480)
and we have located him at position (262, 285) of the camera picture co-ordinates.
holdsAt(coord(George)=(262, 285), 480)
At the same timestamp (480) Alex's action remains an active motion
happensAt(active(Alex), 480)
and we have located him at the new position (267, 285) of the camera picture co-ordinates.
holdsAt(coord(Alex)=(267, 285), 480)
Finally, at timestamp (520) George's action remains an active motion
happensAt(active(George), 520)
and we have located him at the same position (262, 285) of the camera picture co-ordinates.
holdsAt(coord(George)=(262, 285), 520)
At the same timestamp (520) Alex's action stays an active motion
happensAt(active(Alex), 520)
and we have located him at the new position (262, 284) of the camera picture co-ordinates.
holdsAt(coord(Alex)=(262, 284), 520)
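
Collected in one place, the narrative that ILED receives for this example consists of the following ground atoms (here we attach the timestamp as the last argument of holdsAt, following the usual Event Calculus convention):

happensAt(abrupt(George), 400)
happensAt(abrupt(Alex), 400)
holdsAt(coord(George)=(262, 285), 400)
holdsAt(coord(Alex)=(260, 288), 400)
happensAt(abrupt(George), 440)
happensAt(active(Alex), 440)
holdsAt(coord(George)=(262, 286), 440)
holdsAt(coord(Alex)=(262, 285), 440)
happensAt(active(George), 480)
happensAt(active(Alex), 480)
holdsAt(coord(George)=(262, 285), 480)
holdsAt(coord(Alex)=(267, 285), 480)
happensAt(active(George), 520)
happensAt(active(Alex), 520)
holdsAt(coord(George)=(262, 285), 520)
holdsAt(coord(Alex)=(262, 284), 520)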

Embed your knowledge

2. Describe your target language


Description

ILED learns a set of logical rules from data. To facilitate learning, it is useful to provide a general description of what these rules should look like: for instance, which predicates may be placed at the heads/bodies of the rules, what the type of each variable appearing in a rule is, etc. Formally, ILED uses mode declarations (see here for more information).

Example

In activity recognition we would like to learn definitions of when two persons are fighting, based on their behaviours and their distance. Using mode declarations, this target language can be specified as follows:


Formal Representation
Plain Representation
head( initiatedAt(fighting(+person,+person),+time))
A declaration that specifies the structure of terms that may be placed at the heads of rules (this is indicated by the head predicate symbol). The argument inside head(.) in this declaration is a template for creating such terms, which have the form initiatedAt(fighting(X,Y),Z). Here X, Y and Z are variables (this is indicated by the corresponding + symbol in the template) of types person, person and time respectively.
head( terminatedAt(fighting(+person,+person),+time))
Similar to the above. This declaration generates rule-head terms of the form terminatedAt(fighting(X,Y),Z).
body( holdsAt(close(+person,+person,#distance),+time))
A declaration that specifies the structure of atoms that may be placed at the bodies of rules (this is indicated by the body predicate symbol). The argument inside body(.) is a template for terms of the form holdsAt(close(X,Y,35),Z). Here X, Y and Z are variables (as indicated by the + symbol in the template) of types person, person and time respectively, while 35 is a domain constant (as indicated by the # symbol in the template) of type distance.
body( happensAt(behavior(+person,#event),+time))
Similar to the above. This declaration generates rule-body terms of the form happensAt(behavior(X,abrupt),Y) (here X and Y are variables and abrupt is a domain constant), or happensAt(behavior(X,active),Y).
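
Putting these together, the complete language bias handed to ILED is just the four declarations above collected in one place (a sketch of the declarations file; the exact file layout may vary between ILED versions):

head( initiatedAt(fighting(+person,+person),+time))
head( terminatedAt(fighting(+person,+person),+time))
body( holdsAt(close(+person,+person,#distance),+time))
body( happensAt(behavior(+person,#event),+time))

Any rule ILED reports will therefore be an initiatedAt or terminatedAt definition for fighting, whose body conditions are built only from close and behavior atoms over shared person and time variables.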

Recognise events in real time

3. ILED performs learning



4. See results


Learnt rules

initiatedAt(fighting(X,Y),T) :- happensAt(behavior(X,abrupt),T), holdsAt(close(X,Y,25),T).

terminatedAt(fighting(X,Y),T) :- happensAt(behavior(X,running),T), not holdsAt(close(X,Y,30),T).

Explanation

The first rule states that fighting between two persons is initiated when one of them moves abruptly and their Euclidean distance is less than 25.
The second rule states that fighting between two persons is terminated when one of them is running and their Euclidean distance is no longer less than 30.
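
Note that the close fluent used by these rules is not part of the raw narrative of Step 1; it is derived from the recorded coordinates. A minimal sketch of such a derivation, assuming close(X,Y,D) holds at T whenever the Euclidean distance between X and Y at T is below the threshold D (this helper rule is our illustration, not part of ILED's learnt output):

% Hypothetical helper (not learnt by ILED): close(X,Y,D) holds at
% time T when the Euclidean distance between X and Y at T is below D.
holdsAt(close(X,Y,D),T) :-
    holdsAt(coord(X)=(X1,Y1),T),
    holdsAt(coord(Y)=(X2,Y2),T),
    Dist is sqrt((X1-X2)**2 + (Y1-Y2)**2),
    Dist < D.

Under this reading, at timestamp 400 George is at (262, 285) and Alex at (260, 288), so their distance is sqrt(2^2 + 3^2) = sqrt(13) ≈ 3.6, which is less than 25. Since George is also moving abruptly at that point (and assuming the narrative atom happensAt(abrupt(George), 400) corresponds to happensAt(behavior(George,abrupt), 400) in the encoding of the learnt rules), the first rule fires and we derive initiatedAt(fighting(George, Alex), 400).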


Do you have any questions?


See more technical details here!