Limitations
from Child Welfare Workers’ Experiences of Screening for Human Trafficking Victimization: Final Report
Screeners acknowledged that their interactions with youth vary depending on the circumstances, and they draw on multiple rapport-building techniques to engage with youth; however, they noted that these techniques are not guaranteed to succeed (e.g., “…sometimes they’re cool with that. Sometimes they tell me where to go”). Focus group participants explained that they regularly reword Tool items, or reorder them, not only to make the screening more conversational but also to make the youth feel more comfortable (e.g., starting with broad questions before narrowing to specific concerns). This was particularly notable in Section G (Sexual Exploitation/Coercion/Control), as several screeners shared that the Tool language, if used verbatim, could be revictimizing for youth.

Many screeners reported that they often need more contextual information to inform their decisions, so they ask follow-up questions. Notably, screeners find it problematic that there is nowhere within the relevant sections to record the contextual details they gather. Determining the likelihood of trafficking is based on the totality of the information gathered, with screeners demonstrating critical thinking and consideration of circumstantial evidence. Screeners find several indicators to be particularly helpful in their investigation (i.e., tattoos, runaway behavior, debt bondage, grooming, age-inappropriate relationships, social media activity). Interestingly, many of these items could be considered more visible indicators; that is, some of these indicators might be known to the screener without the explicit cooperation of the youth. One screener (#94202) noted this:
Really, the only indicators I would say that are useful in identifying victims is the physical indicators. So, again, that would be like the child having money, the child having their hair and nails done, the child having tattoos, those physical indicators that are right in your face. Those are the easiest indicators that are like “Okay, yeah, something is definitely going on with this kid.”

This suggests that additional training might be helpful to build screeners’ abilities to assess for red flags that are less visible or more ambiguous. Focus group participants shared feedback that items in several sections or subsections were ambiguous or vague, including living conditions (Section D), deceptive pay practices (Section E), and inability to leave (Section G). Importantly, in phase two of the Institute’s HTST work, there were two items that did not align well with the factor structure of the scale: evidence of forced labor and evidence of forced tattooing/branding. The present data help to illuminate potential reasons why these particular items might not have fit within the statistical model.

Regarding forced labor, it was initially suggested that this item might have been statistically problematic due to the small number of youths who screened positive for that indicator.14 This remains a reasonable suggestion, though the present focus group data illuminated challenges with the sub-items for that indicator that could also have impacted data quality. First, some screeners felt that asking “do you live and work in the same place” might not reflect labor trafficking victims’ living situations. Some suggested broadening this question to ask whether youth live with any coworkers, given that multiple victims sometimes live together and are brought to a common work site. Second, screeners questioned whether youth would understand what is meant by “punishment” in the item “Can you quit or could you have quit your job at any time without punishment from your boss or supervisor?” Again, they suggested a more open-ended item. Relatedly, one screener noted that if a youth was “unsure” whether they would be punished, this would be marked as “yes.” The potential need for new or reworded items, as well as insight into the nuances of particular screeners’ processes, highlights potential inconsistencies in how the Tool is completed. This could help explain why the evidence of forced labor indicator did not fit within the two-factor model presented in the previous phase of work.

Regarding tattoos, not all screeners report asking youth about them. Some rely on visual cues and may not probe further, while others delve deeper, even using their own tattoos to elicit conversation. Again, there exists a discrepancy in process that could lead to inaccurate data being captured on Tools and that could have influenced the statistical findings in the prior report. It could also suggest a training need regarding the significance of these items. In addition to youth transparency issues, some screeners pointed out that tattoos can be a vague sign of trafficking (e.g., “…Kids are just wild and crazy and they just go get tattoos because it’s so much more popular…”), so it takes skill on the part of the screener to decipher contextual clues. Focus group participants suggested adding items to the Tool to capture more contextual information (e.g., who took you, who paid). Relatedly, some screeners noted that it is unclear how to respond to scarring items if the youth is known to engage in self-injurious behavior.
One screener contradicted themselves over the course of the focus group, first saying they would indicate scarring for self-injurious behavior, then saying they were not sure; more guidance from the Department is needed. Nearly all interviewees shared that if their determination is “unsure,” “likely is,” or “definitely is,” they notify their supervisor, and a multi-disciplinary team meeting is called to determine next steps. Among focus group participants, evidence arose of subjectivity in how screeners make their determinations; however, this could also be interpreted as screeners being responsive to individual youth. There was also variation among focus group participants in how engaged they are with the post-screening assessment (Section I). This inconsistency was noted in the previous HTST validation report with respect to screeners providing three reasons for their determination.14 In the present study, some screeners shared that they append multiple pages of notes, while another noted that they do not feel a need to answer item 50:
Because it – I don’t think that it impacts anything. I mean, if you’re – if I’m the one [completing] the training and I’m the one responsible for doing the tool, I don’t think that I should have to answer three reasons why I chose the answer I did. I mean, ask me. Have a conversation.
Finally, data were collected during the first year of the COVID-19 pandemic. Although very few screeners discussed the impacts of COVID-19 on the HTST process, the few who did noted general safety protocol changes (e.g., “… When we receive a call, we have to go through, we have to see if there’s any Covid concerns, any exposure concerns”) or challenges related to assessment (e.g., school attendance as a red flag is no longer relevant).
LIMITATIONS
The present study is not without limitations. Namely, given the methodology, these results cannot be generalized to all screeners in Florida. Although self-selection bias is a concern, the interview sample size is adequate for phenomenological research1 and saturation was achieved except where noted. The focus group sample sizes, both overall and within groups, were smaller than ideal; however, general findings could be triangulated with the current interview data, as well as with data from previous phases of HTST validation work, to support conclusions and recommendations. Finally, given that data were collected in 2020, it is possible that there have been updates to the HTST and relevant processes, including training, that are not captured here.