US 9,812,151 B1
Generating communicative behaviors for anthropomorphic virtual agents based on user's affect
Reza Amini, Secaucus, NJ (US); Ugan Yasavur, Jersey City, NJ (US); Jorge Travieso, New York, NY (US); and Chetan Dube, New York, NY (US)
Assigned to IPSOFT INCORPORATED, New York, NY (US)
Filed by IPsoft Incorporated, New York, NY (US)
Filed on Feb. 23, 2017, as Appl. No. 15/440,589.
Claims priority of provisional application 62/423,880, filed on Nov. 18, 2016.
Claims priority of provisional application 62/423,881, filed on Nov. 18, 2016.
Int. Cl. G10L 21/00 (2013.01); G10L 25/00 (2013.01); G10L 13/08 (2013.01); G10L 21/10 (2013.01); G10L 25/63 (2013.01); G06T 13/40 (2011.01); G06K 9/00 (2006.01); G10L 15/26 (2006.01)
CPC G10L 21/10 (2013.01) [G06K 9/00302 (2013.01); G06T 13/40 (2013.01); G10L 15/265 (2013.01); G10L 25/63 (2013.01)] 4 Claims
OG exemplary drawing
 
1. A method to control output of a virtual agent based on a user's affect, the computerized method comprising:
receiving, by a computer, an utterance of a user;
determining, by the computer, an emotion vector of the user based on a content of the utterance, wherein determining the emotion vector of the user comprises:
i) determining a sentiment score of the utterance by determining a sentiment score for each word in the utterance, the sentiment score based on one or more predefined rules,
ii) determining a dialogue performance score based on one or more received dialogue performance metrics,
iii) determining a vocal emotion score based on a received vocal characteristic of the user,
iv) determining a facial emotion score based on a received facial expression of the user, and
v) determining the emotion vector by averaging the sentiment score, the vocal emotion score, the facial emotion score, and the dialogue performance score;
determining, by the computer, a mood vector of the user based on the emotion vector of the user;
determining, by the computer, a personality vector of the user based on the mood vector of the user;
determining, by the computer, at least one of a facial expression, body gesture, vocal expression, or verbal expression for the virtual agent based on a content of the utterance of the user and at least one of the emotion vector for the user, the mood vector for the user, and the personality vector for the user; and
applying, by the computer, the facial expression, body gesture, vocal expression, verbal expression, or any combination thereof to the virtual agent to produce control of the virtual agent's vocal expression, facial expression, or both.
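The fusion pipeline of claim 1 (steps i–v, plus the mood update) can be sketched as follows. This is an illustrative sketch only, not the patentee's implementation: the emotion label set, the rule lexicon, the 0–1 score scale, and the moving-average mood model are all assumptions introduced here for clarity. Each channel (sentiment, dialogue performance, vocal, facial) is assumed to produce a vector over the same emotion labels, so step v reduces to an element-wise average.

```python
from typing import Dict, List

EMOTIONS = ("joy", "anger", "sadness", "surprise")  # illustrative label set

# Hypothetical rule lexicon mapping words to per-emotion scores (step i's
# "one or more predefined rules"); real rule sets would be far richer.
RULE_LEXICON: Dict[str, Dict[str, float]] = {
    "great": {"joy": 0.9},
    "terrible": {"anger": 0.6, "sadness": 0.7},
}

def sentiment_vector(utterance: str) -> List[float]:
    """Step i: score each word by rule, then average over the utterance."""
    words = utterance.lower().split()
    totals = [0.0] * len(EMOTIONS)
    for w in words:
        scores = RULE_LEXICON.get(w, {})
        for i, e in enumerate(EMOTIONS):
            totals[i] += scores.get(e, 0.0)
    n = max(len(words), 1)
    return [t / n for t in totals]

def fuse_emotion_vector(sentiment: List[float], dialogue: List[float],
                        vocal: List[float], facial: List[float]) -> List[float]:
    """Step v: element-wise average of the four channel score vectors."""
    return [(a + b + c + d) / 4.0
            for a, b, c, d in zip(sentiment, dialogue, vocal, facial)]

def update_mood(mood: List[float], emotion: List[float],
                alpha: float = 0.2) -> List[float]:
    """Mood vector as a slow exponential moving average of emotion vectors
    (an assumed model of 'mood based on the emotion vector')."""
    return [(1 - alpha) * m + alpha * e for m, e in zip(mood, emotion)]
```

A usage pass would feed each new utterance through `sentiment_vector`, fuse it with the dialogue, vocal, and facial channel scores, and fold the result into the running mood vector; a personality vector could then be a still slower average over mood, mirroring the emotion → mood → personality cascade in the claim.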