Introduction
The topic of artificial intelligence (AI) and its applications across society, particularly in law enforcement, has become increasingly common in recent IELTS Reading exams. It belongs to the Technology & Society group and can appear in any of the three passages, approached from different angles: from basic introductions of the technology to in-depth analyses of its social and ethical impact.
In this article, you will work through a complete IELTS Reading test with three passages of increasing difficulty, built around the question "What Are The Social Implications Of Increasing Use Of AI In Law Enforcement?"
What you will get:
- A full three-passage test, from Easy to Hard, closely modeled on the real exam
- 40 varied questions covering 7 question types commonly seen in IELTS
- A detailed answer key explaining where each answer is located and how it is paraphrased
- More than 40 important vocabulary items organized by passage, with pronunciation and examples
- Effective test-taking strategies and time allocation
This test is suitable for learners from band 5.0 upward, helping you get used to the real exam's difficulty and build systematic test-taking skills.
1. How to Approach the IELTS Reading Test
Overview of the IELTS Reading Test
The IELTS Reading test is a key component of the exam: you have 60 minutes to complete 3 passages with 40 questions in total. Your score is based on the number of correct answers; there is no penalty for wrong answers.
Recommended time allocation:
- Passage 1 (Easy): 15-17 minutes – the shortest passage, with relatively easy questions
- Passage 2 (Medium): 18-20 minutes – a mid-length passage with increased complexity
- Passage 3 (Hard): 23-25 minutes – the longest and most difficult passage, requiring more inference
Important note: reserve the final 2-3 minutes to transfer your answers to the answer sheet. No extra time is given for this!
Question Types in This Test
This test covers 7 of the most common IELTS Reading question types:
- Multiple Choice (16 questions)
- True/False/Not Given – decide whether information is true, false, or not mentioned (5 questions)
- Yes/No/Not Given – identify the writer's opinion (3 questions)
- Sentence Completion (4 questions)
- Summary Completion (5 questions)
- Matching Features – match information to features (3 questions)
- Short-answer Questions (4 questions)
Each question type requires a different reading skill and strategy, giving you well-rounded practice.
2. IELTS Reading Practice Test
PASSAGE 1 – The Rise of Artificial Intelligence in Modern Policing
Difficulty: Easy (Band 5.0-6.5)
Suggested time: 15-17 minutes
A. Law enforcement agencies around the world are increasingly turning to artificial intelligence (AI) to help them fight crime more effectively. From predictive policing systems that analyze crime patterns to facial recognition technology at airports and public spaces, AI is transforming how police officers do their jobs. This technological revolution promises to make communities safer while allowing police forces to use their resources more efficiently. However, as these systems become more common, questions are being raised about their accuracy, fairness, and impact on civil liberties.
B. One of the most widely used AI applications in law enforcement is predictive policing. These systems analyze historical crime data, including the locations, times, and types of crimes that have occurred in the past. By identifying patterns in this data, the AI can predict where crimes are most likely to happen in the future. Police departments can then allocate their patrols more strategically, sending officers to high-risk areas before crimes occur. Cities like Los Angeles and Chicago have reported significant reductions in certain types of crime after implementing these systems. The technology essentially acts as a sophisticated forecasting tool, similar to how meteorologists predict weather patterns.
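The forecasting idea in paragraph B can be made concrete with a small sketch. This is purely illustrative: real predictive-policing systems are far more sophisticated, and the district names and incident data below are invented.

```python
from collections import Counter

# Invented historical incident records: (district, hour_of_day)
incidents = [
    ("riverside", 22), ("riverside", 23), ("riverside", 22),
    ("downtown", 14), ("downtown", 15),
    ("hillcrest", 3),
]

# Count past incidents per (district, hour) bucket and rank the busiest
# buckets -- a crude stand-in for "predicting" future hotspots from
# historical patterns, as the passage describes.
hotspots = Counter(incidents).most_common(2)

for (district, hour), count in hotspots:
    print(f"{district} around {hour}:00 -> {count} past incidents")
```

Even this toy version shows the core assumption the passage mentions: like a weather forecast, the model can only project past patterns forward, so its predictions are only as good as the historical data behind them.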
C. Facial recognition technology represents another major AI application in policing. Modern systems can scan faces in crowds and compare them against databases of wanted criminals or missing persons in real-time. At major airports and border crossings, this technology helps identify individuals who may pose security threats. Some police forces have even equipped their officers with body-worn cameras that include facial recognition capabilities, allowing them to identify suspects immediately during street encounters. Supporters argue that this technology has helped solve numerous cases that would have otherwise remained unsolved, particularly in identifying suspects from surveillance footage.
D. AI is also being used to analyze vast amounts of digital evidence more quickly than human investigators ever could. When police seize smartphones, computers, or servers during investigations, these devices often contain millions of files. AI systems can search through this data rapidly, identifying relevant messages, images, or documents that might be evidence of criminal activity. This capability has proven particularly valuable in investigating organized crime networks, financial fraud, and online exploitation. What might take a team of human analysts months to review can now be processed by AI systems in days or even hours.
E. The use of AI in crime scene analysis and forensics is another emerging application. Machine learning algorithms can now analyze forensic evidence such as fingerprints, DNA samples, and ballistic data with increasing accuracy. Some systems can even analyze patterns in handwriting or voice recordings to help identify suspects. Additionally, AI is being used to create three-dimensional reconstructions of crime scenes, allowing investigators to explore and analyze scenes virtually long after the physical evidence has been collected and processed.
F. Despite these promising applications, the introduction of AI into law enforcement is not without controversy. Privacy advocates worry about the constant surveillance that these technologies enable. There are also concerns about algorithmic bias – the possibility that AI systems might discriminate against certain groups based on flawed training data or programming. For instance, some facial recognition systems have been found to be less accurate at identifying people with darker skin tones. Furthermore, critics question whether predictive policing systems might create self-fulfilling prophecies, where increased police presence in predicted areas leads to more arrests, which then reinforces the algorithm’s predictions in a continuous cycle.
G. Law enforcement officials generally maintain that these technologies are simply tools to assist human officers, not replace them. They emphasize that final decisions about arrests and prosecutions are always made by people, not machines. Many police departments have established oversight committees to ensure AI systems are used appropriately and to address concerns about bias and privacy. As AI technology continues to evolve, finding the right balance between public safety and civil liberties will remain an ongoing challenge for societies worldwide.
AI technology is being widely deployed by modern police forces to detect crime
Questions 1-13
Questions 1-5
Do the following statements agree with the information given in Passage 1?
Write:
- TRUE if the statement agrees with the information
- FALSE if the statement contradicts the information
- NOT GIVEN if there is no information on this
1. AI technology in policing is only used in a few countries around the world.
2. Predictive policing systems analyze past crime data to forecast where future crimes might occur.
3. Los Angeles and Chicago have completely eliminated crime after using predictive policing systems.
4. Facial recognition technology can identify people in real-time by comparing them to existing databases.
5. All facial recognition systems work equally well for people of all skin tones.
Questions 6-9
Choose the correct letter, A, B, C, or D.
6. According to paragraph B, predictive policing is compared to:
- A. a security system
- B. weather forecasting
- C. a crime database
- D. police patrol methods

7. AI systems used to analyze digital evidence are particularly useful for investigating:
- A. petty theft
- B. traffic violations
- C. organized crime networks
- D. missing persons cases

8. What advantage does AI offer in forensic analysis?
- A. It completely replaces human investigators
- B. It can create virtual crime scene reconstructions
- C. It eliminates the need for physical evidence
- D. It makes all evidence collection unnecessary

9. Privacy advocates are concerned that AI in law enforcement might lead to:
- A. reduced crime rates
- B. more efficient policing
- C. constant surveillance
- D. better criminal identification
Questions 10-13
Complete the sentences below.
Choose NO MORE THAN TWO WORDS from the passage for each answer.
10. Police officers can now wear special cameras that include __ capabilities.
11. AI can process digital evidence from seized devices in days instead of the __ it would take humans.
12. Critics worry that predictive policing might create __ where predictions become reality.
13. Many police departments have created __ to ensure AI is used properly.
PASSAGE 2 – Balancing Security and Privacy in the Age of AI Policing
Difficulty: Medium (Band 6.0-7.5)
Suggested time: 18-20 minutes
The integration of artificial intelligence into law enforcement operations has sparked an intense debate about the fundamental trade-off between security and privacy in democratic societies. While proponents argue that AI-powered policing tools are essential for maintaining public safety in an increasingly complex world, critics contend that these technologies pose unprecedented threats to civil liberties and may exacerbate existing inequalities in the justice system. Understanding these competing perspectives is crucial for developing regulatory frameworks that can harness the benefits of AI while protecting individual rights.
The Security Case for AI-Enhanced Law Enforcement
Advocates of AI in policing emphasize its potential to save lives and prevent crime before it occurs. Predictive analytics, they argue, allows police departments to deploy resources proactively rather than merely responding to incidents after they happen. In an era of constrained budgets and growing urban populations, this efficiency gain is not trivial. Studies conducted by research institutions have suggested that some predictive policing programs have correlated with reductions in certain crime categories, particularly property crimes like burglary and vehicle theft. The technology’s supporters point to these outcomes as evidence that AI can deliver measurable improvements in public safety.
Moreover, AI systems excel at processing and connecting information across multiple sources – a capability that has proven invaluable in counterterrorism efforts and investigations of transnational criminal organizations. Human analysts can easily become overwhelmed by the sheer volume of data generated by modern surveillance systems, social media, financial transactions, and communication networks. AI algorithms can identify suspicious patterns and connections that might otherwise go unnoticed, potentially preventing terrorist attacks or dismantling criminal enterprises before they can cause significant harm. The 2019 arrest of a major human trafficking network in Europe, facilitated by AI analysis of travel patterns and financial records, exemplifies this potential.
Privacy Concerns and Civil Liberties
However, civil liberties organizations have raised substantial objections to the expanding use of AI in policing. Their primary concern centers on the erosion of privacy in public spaces. Facial recognition technology, in particular, has drawn sharp criticism for enabling what critics describe as mass surveillance on an unprecedented scale. Unlike traditional policing methods, which require officers to have reasonable suspicion before investigating individuals, AI systems can continuously monitor everyone in their range, creating detailed records of people’s movements and associations without any individualized justification.
The permanence and searchability of this data compounds privacy concerns. In the past, a person walking down a street might have been observed by a police officer, but that observation was ephemeral – it existed only in the officer’s memory. Today, facial recognition systems create permanent, searchable records of people’s public activities. As the American Civil Liberties Union has argued, this transformation effectively eliminates the practical obscurity that has traditionally protected privacy in public spaces. The potential for abuse is significant: authoritarian governments have already weaponized similar technologies to track and suppress political dissidents, demonstrating that the dystopian scenarios feared by critics are not merely theoretical.
Algorithmic Bias and Discriminatory Outcomes
Beyond privacy concerns, mounting evidence suggests that AI systems used in law enforcement may perpetuate and amplify existing biases in the criminal justice system. Multiple studies have documented that facial recognition technologies exhibit differential accuracy rates across demographic groups, with error rates for people of color – particularly women of color – being significantly higher than for white males. In 2020, researchers at MIT found that some commercial facial recognition systems had error rates exceeding 30% for dark-skinned females while maintaining near-perfect accuracy for light-skinned males.
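The disparity described above is, at its core, a per-group error-rate calculation. The sketch below uses invented round numbers loosely echoing the figures in the passage, not the actual data from the MIT study:

```python
# Invented evaluation results: group -> (misidentifications, total trials)
results = {
    "dark-skinned females": (31, 100),
    "light-skinned males": (1, 100),
}

# A single aggregate accuracy figure would hide the gap between groups;
# computing error rates per group is what exposes it.
for group, (errors, total) in results.items():
    rate = errors / total
    print(f"{group}: {rate:.0%} error rate")
```

This is why audits of such systems report disaggregated metrics: a system that is "99% accurate overall" can still fail one demographic group at thirty times the rate of another.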
The implications of these disparities are profound. When AI systems make more mistakes identifying certain groups, members of those groups face heightened risks of misidentification, wrongful arrest, and unjust prosecution. Several documented cases have already emerged where individuals were arrested based on incorrect facial recognition matches, causing significant harm to innocent people. These incidents underscore critics’ warnings that deploying imperfect AI systems in consequential contexts like policing can systematize discrimination rather than promote objectivity.
Predictive policing algorithms face similar criticisms. Because these systems are trained on historical crime data, they may inherit the biases present in that data. If police have historically over-patrolled certain neighborhoods or disproportionately arrested members of particular communities, the algorithm will learn to direct more police attention to those same areas and populations. This creates a feedback loop where biased policing practices generate biased data, which produces biased predictions, which justify continued biased policing. Rather than making law enforcement more objective, critics argue, these systems may simply automate discrimination while lending it a veneer of technological neutrality.
Toward Responsible AI Governance
Addressing these challenges requires comprehensive governance frameworks that can accommodate both security needs and rights protections. Some jurisdictions have begun experimenting with such approaches. The European Union’s proposed regulations on AI would classify law enforcement applications as “high-risk,” subjecting them to rigorous testing requirements, transparency obligations, and human oversight mandates. Several American cities, including San Francisco and Boston, have enacted outright bans on government use of facial recognition technology, concluding that the risks outweigh potential benefits.
Other experts advocate for middle-ground solutions that would permit AI use in policing but with substantial safeguards. These might include independent audits of algorithms for bias, strict limits on data retention periods, requirements for warrants before accessing certain types of AI-generated information, and meaningful mechanisms for redress when systems make errors. Proponents of these approaches argue that categorically rejecting AI in law enforcement would abandon potentially valuable tools, while uncritical adoption would be equally irresponsible. The challenge lies in calibrating regulations that can distinguish between acceptable and unacceptable applications while remaining flexible enough to accommodate technological evolution.
Balancing public security and personal privacy in the age of AI policing
Questions 14-26
Questions 14-18
Choose the correct letter, A, B, C, or D.
14. According to the passage, proponents of AI in policing believe it helps:
- A. completely eliminate all types of crime
- B. deploy police resources more proactively
- C. reduce the need for human police officers
- D. create perfect surveillance systems

15. The 2019 arrest of a human trafficking network in Europe demonstrates:
- A. that AI is infallible in criminal investigations
- B. the ability of AI to analyze patterns across multiple data sources
- C. that human analysts are no longer needed
- D. the failure of traditional policing methods

16. What is the “practical obscurity” referred to in the passage?
- A. Police officers’ poor memory
- B. The difficulty of seeing in public spaces
- C. The traditional protection of privacy through impermanence of observations
- D. The inability of old technology to record information

17. Research from MIT in 2020 found that facial recognition systems:
- A. work equally well for all demographic groups
- B. have higher error rates for dark-skinned females
- C. should never be used by police
- D. are completely unreliable

18. Predictive policing algorithms may inherit biases because:
- A. programmers deliberately include discrimination
- B. they are trained on historical data that reflects past biased practices
- C. AI systems are inherently prejudiced
- D. police departments want to discriminate
Questions 19-23
Complete the summary below.
Choose NO MORE THAN TWO WORDS from the passage for each answer.
AI in law enforcement has created a debate between security and privacy. Supporters say predictive analytics allows (19) __ to be deployed proactively, leading to crime reductions. AI can also process large amounts of data to identify (20) __ that humans might miss. However, critics worry about the (21) __ in public spaces created by technologies like facial recognition. Unlike traditional methods that require (22) __ before investigation, AI systems can monitor everyone continuously. There are also concerns about (23) __ in these systems, which show different accuracy rates for different demographic groups.
Questions 24-26
Do the following statements agree with the views of the writer in Passage 2?
Write:
- YES if the statement agrees with the views of the writer
- NO if the statement contradicts the views of the writer
- NOT GIVEN if it is impossible to say what the writer thinks about this
24. The complete rejection of AI in law enforcement would mean losing potentially valuable tools.
25. All European countries have banned facial recognition technology.
26. Finding the right balance between security and privacy is a simple task.
PASSAGE 3 – The Sociopolitical Ramifications of AI-Mediated Law Enforcement: A Critical Analysis
Difficulty: Hard (Band 7.0-9.0)
Suggested time: 23-25 minutes
The proliferation of artificial intelligence systems within law enforcement institutions represents far more than a mere technological upgrade of policing capabilities; it constitutes a fundamental reconfiguration of the relationship between citizens and state power. This transformation carries profound implications that extend beyond immediate concerns about privacy and bias to encompass questions about democratic accountability, the stratification of surveillance, and the potential emergence of technocratic governance models that may circumvent traditional legal and political checks on executive authority. Understanding these deeper sociopolitical dimensions requires eschewing both technophilic optimism and Luddite pessimism in favor of a nuanced analysis that recognizes how AI systems are embedded within and shaped by existing power structures and social inequalities.
The Opacity Problem and Democratic Accountability
A critical yet frequently underappreciated challenge posed by AI in law enforcement concerns the opacity of machine learning algorithms, particularly those employing deep learning architectures. These systems often function as “black boxes,” producing outputs through processes that are inscrutable not only to the general public but also to the police officers using them and, in many cases, even to their designers. This opacity creates a deficit in democratic accountability. When an algorithm recommends increased policing in a particular neighborhood or flags an individual as high-risk, the logical chain connecting input data to output recommendation may be impossible to interrogate or contest meaningfully.
This problem is exacerbated by the fact that many law enforcement agencies procure AI systems from private vendors who claim proprietary rights over their algorithms, treating them as trade secrets protected from public disclosure. When defendants in criminal cases have sought access to the code of algorithms used against them – exercising what would seem to be a fundamental due process right to confront the evidence – courts have sometimes sided with vendors’ intellectual property claims, effectively placing corporate confidentiality above constitutional protections. This corporatization of criminal justice processes raises disturbing questions about whether profit-seeking entities should wield such influence over the machinery of state coercion.
Moreover, the technical expertise required to audit AI systems meaningfully creates a knowledge asymmetry between technology companies, government agencies, and the public. Even when algorithms are theoretically open to inspection, few individuals or organizations possess the specialized competencies necessary to evaluate them effectively. This dynamic risks creating a technocratic elite whose pronouncements about algorithmic fairness and accuracy must be accepted on faith by the broader public, undermining the accessibility and intelligibility that democratic governance traditionally demands.
Surveillance Capitalism and the Stratification of Privacy
The integration of AI into law enforcement must also be understood within the broader context of what scholar Shoshana Zuboff has termed “surveillance capitalism” – the economic model that commodifies personal data and behavioral predictions. The same technologies that enable AI policing – facial recognition, predictive analytics, behavioral modeling – have been developed primarily by private companies seeking to monetize consumer surveillance. The infrastructure of cameras, sensors, and data systems that feeds AI policing tools is often owned and operated by corporations that simultaneously serve commercial clients seeking consumer insights.
This dual-use nature of surveillance technology creates troubling conflicts of interest and enables what we might call the “stratification of privacy.” Wealthy individuals and communities can opt out of various forms of surveillance and data collection – living in gated communities without public cameras, using privacy-protecting technologies, and affording legal expertise to contest unwanted intrusions. Meanwhile, low-income communities, which are often disproportionately policed, lack these options. They become both the primary targets of AI-powered law enforcement and the primary sources of data used to train these systems, while having minimal voice in decisions about their deployment. This creates a form of “privacy poverty” that reinforces broader patterns of social marginalization.
Furthermore, the profit motives driving surveillance capitalism create perverse incentives for expansive data collection and perpetual technological escalation. Companies developing AI policing tools have financial interests in promoting their products’ capabilities and downplaying their limitations. They benefit from expanding the scope and intensity of surveillance, potentially encouraging law enforcement agencies to adopt more intrusive technologies than public safety genuinely requires. The result is what some critics have termed “surveillance creep” – the gradual normalization of monitoring practices that would previously have been considered dystopian.
The Automation of Inequality and Systemic Discrimination
Perhaps the most profound social implication of AI in law enforcement concerns its potential to entrench and legitimize systemic discrimination through the veneer of objectivity that technological systems carry. As legal scholar and AI ethics researcher Virginia Eubanks has documented, automated decision-making systems consistently produce disparate impacts on marginalized communities, effectively automating inequality. In the law enforcement context, this occurs through multiple mechanisms.
First, as previously noted, training algorithms on biased historical data perpetuates past discrimination. But the problem runs deeper: the very categories and parameters that algorithms use to assess risk may encode discriminatory assumptions. For instance, if a risk assessment algorithm considers neighborhood as a factor – and neighborhoods remain highly segregated by race and class – the system will inevitably produce racially disparate outcomes, even if it never explicitly considers race. Such systems operationalize what sociologists call “structural racism” – discrimination that persists not through individual prejudice but through ostensibly neutral institutional practices.
Second, the quantification imperatives of algorithmic systems require reducing complex social realities to measurable variables. This process inevitably obscures important contextual factors and nuances. An algorithm might flag someone as high-risk based on their associates or location, without understanding that these connections reflect constrained economic opportunities or residential segregation rather than criminal propensity. By stripping away social context, AI systems can reify patterns of disadvantage, treating them as immutable characteristics of individuals rather than contingent outcomes of unjust social arrangements.
Third, and most insidiously, AI systems can create a self-fulfilling prophecy that is difficult to disrupt. When algorithms direct police to certain neighborhoods, crime is more likely to be detected there – not necessarily because more crime occurs there, but because surveillance is concentrated there. This detected crime then becomes new training data, reinforcing the algorithm’s original bias. Over time, this creates what mathematician Cathy O’Neil calls a “pernicious feedback loop” where algorithmic predictions become reality-distorting rather than reality-reflecting. The system generates its own justification, making it nearly impossible to disentangle discriminatory outcomes from the data used to defend them.
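O'Neil's feedback loop can be illustrated with a minimal simulation. All numbers and district names are invented, and the model makes one deliberately simple assumption: crime is only recorded where police actually patrol, while the two districts have identical underlying crime rates.

```python
# Two districts with the SAME true crime rate (invented toy numbers).
true_rate = {"north": 0.10, "south": 0.10}

# Historical data slightly over-represents "north".
recorded = {"north": 6, "south": 4}

for step in range(4):
    # The algorithm sends patrols to the district with more recorded crime.
    target = max(recorded, key=recorded.get)
    # Crime is only *detected* where police patrol, so only the targeted
    # district accumulates new records: the prediction writes its own evidence.
    recorded[target] += 10 * true_rate[target]
    print(step, target, recorded)
```

Despite identical true crime rates, "north" is targeted every round and its record count grows while "south" stays frozen, so the data soon appears to confirm the initial skew. This is the lock-in the passage describes: the system's output becomes indistinguishable from its justification.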
Toward Democratic Control of Algorithmic Governance
Addressing these multifaceted challenges requires moving beyond piecemeal technical fixes toward comprehensive democratic governance of AI systems. Some scholars advocate for “algorithmic accountability” frameworks that would require transparency, auditability, and contestability as preconditions for deploying AI in consequential domains like law enforcement. Others argue that certain applications – such as facial recognition in public spaces – should be prohibited outright, concluding that no amount of refinement can render them compatible with civil liberties.
A promising direction involves participatory technology assessment processes that give affected communities meaningful input into decisions about AI deployment. Rather than treating AI adoption as a purely technical question best left to experts, this approach recognizes it as a profoundly political choice that should be subject to democratic deliberation. Some cities have established community oversight boards with authority to review and reject proposed surveillance technologies, demonstrating that citizen control over these systems is feasible.
Ultimately, the social implications of AI in law enforcement cannot be separated from broader questions about the purposes of policing itself and the kind of society we wish to create. Technology is not deterministic; the same AI capabilities could theoretically be employed to make law enforcement more equitable or more discriminatory, depending on how they are governed and toward what ends they are deployed. The trajectory we follow will be determined not by technological inevitability but by political choices – choices that require informed civic engagement and sustained critical scrutiny of claims made by both technology vendors and government agencies. Only through such democratic vigilance can we hope to ensure that AI serves human flourishing rather than becoming another instrument of social control and stratification.
An in-depth analysis of the social and political impact of AI in law enforcement
Questions 27-40
Questions 27-31
Choose the correct letter, A, B, C, or D.
27. According to the passage, the “opacity problem” refers to:
- A. poor lighting in police stations
- B. the difficulty in understanding how AI algorithms make decisions
- C. lack of public interest in AI technology
- D. police officers’ unwillingness to use technology

28. The author suggests that corporate claims of trade secrets in AI algorithms:
- A. are always justified
- B. should be respected above all else
- C. may conflict with constitutional rights of defendants
- D. are completely illegal

29. “Surveillance capitalism” as described in the passage involves:
- A. government control of all businesses
- B. making money from personal data and behavioral predictions
- C. eliminating all forms of privacy
- D. preventing companies from using technology

30. The concept of “privacy poverty” suggests that:
- A. poor people don’t care about privacy
- B. wealthy individuals face more surveillance
- C. low-income communities lack options to avoid surveillance
- D. privacy is only for rich people

31. The “pernicious feedback loop” described by Cathy O’Neil occurs when:
- A. algorithms improve over time
- B. algorithmic predictions reinforce themselves through biased data collection
- C. police officers reject AI recommendations
- D. communities demand more surveillance
Questions 32-34
Match each researcher/scholar with their contribution mentioned in the passage.
Choose the correct letter, A-G.
List of Contributions:
- A. Documented how automated systems produce disparate impacts on marginalized communities
- B. Coined the term “surveillance capitalism”
- C. Described feedback loops where predictions become self-fulfilling
- D. Developed facial recognition technology
- E. Proved AI systems are always biased
- F. Created the first predictive policing algorithm
- G. Established the first community oversight board
32. Shoshana Zuboff
33. Virginia Eubanks
34. Cathy O’Neil
Questions 35-36
According to the passage, which TWO of the following are suggested as approaches to governing AI in law enforcement?
- A. Allowing companies complete control over algorithms
- B. Prohibiting certain applications like facial recognition in public spaces
- C. Establishing participatory technology assessment with community input
- D. Eliminating all technology from policing
- E. Keeping all AI systems secret from the public
Questions 37-40
Answer the questions below.
Choose NO MORE THAN THREE WORDS from the passage for each answer.
37. What type of architecture makes many AI systems particularly difficult to understand?
38. What creates a gap between technology companies and the public regarding AI evaluation?
39. What term describes when someone is treated as having unchangeable characteristics based on their circumstances?
40. According to the author, what will determine the trajectory of AI in law enforcement rather than technological inevitability?
3. Answer Keys – Đáp Án
PASSAGE 1: Questions 1-13
1. FALSE
2. TRUE
3. FALSE
4. TRUE
5. FALSE
6. B
7. C
8. B
9. C
10. facial recognition
11. months
12. self-fulfilling prophecies
13. oversight committees
PASSAGE 2: Questions 14-26
14. B
15. B
16. C
17. B
18. B
19. resources / police resources
20. suspicious patterns
21. mass surveillance
22. reasonable suspicion
23. algorithmic bias / bias
24. YES
25. NOT GIVEN
26. NO
PASSAGE 3: Questions 27-40
27. B
28. C
29. B
30. C
31. B
32. B
33. A
34. C
35. B
36. C
37. deep learning architectures
38. knowledge asymmetry / technical expertise
39. immutable characteristics
40. political choices
4. Giải Thích Đáp Án Chi Tiết
Passage 1 – Giải Thích
Câu 1: FALSE
- Dạng câu hỏi: True/False/Not Given
- Từ khóa: AI technology, only used, few countries
- Vị trí trong bài: Đoạn A, dòng 1-2
- Giải thích: Câu hỏi nói AI chỉ được dùng ở một vài quốc gia, nhưng bài đọc nói “Law enforcement agencies around the world are increasingly turning to artificial intelligence” (các cơ quan thực thi pháp luật trên toàn thế giới đang ngày càng chuyển sang sử dụng AI). “Around the world” trái ngược với “only in a few countries”, vì vậy đáp án là FALSE.
Câu 2: TRUE
- Dạng câu hỏi: True/False/Not Given
- Từ khóa: Predictive policing systems, analyze past crime data, forecast future crimes
- Vị trí trong bài: Đoạn B, dòng 2-4
- Giải thích: Bài đọc nói: “These systems analyze historical crime data… By identifying patterns in this data, the AI can predict where crimes are most likely to happen in the future.” Đây chính xác là paraphrase của câu hỏi: “historical crime data” = “past crime data”, “predict” = “forecast”. Đáp án là TRUE.
Câu 3: FALSE
- Dạng câu hỏi: True/False/Not Given
- Từ khóa: Los Angeles, Chicago, completely eliminated crime
- Vị trí trong bài: Đoạn B, dòng 6-7
- Giải thích: Bài viết nói “Cities like Los Angeles and Chicago have reported significant reductions in certain types of crime” (giảm đáng kể một số loại tội phạm nhất định), không phải “completely eliminated” (loại bỏ hoàn toàn). Đây là sự khác biệt quan trọng: “significant reductions” ≠ “completely eliminated”. Đáp án là FALSE.
Câu 6: B
- Dạng câu hỏi: Multiple Choice
- Từ khóa: predictive policing, compared to
- Vị trí trong bài: Đoạn B, dòng 8-9
- Giải thích: Bài đọc nói rõ: “The technology essentially acts as a sophisticated forecasting tool, similar to how meteorologists predict weather patterns.” Đây là phép so sánh trực tiếp giữa predictive policing và dự báo thời tiết (weather forecasting). Đáp án là B.
Câu 10: facial recognition
- Dạng câu hỏi: Sentence Completion
- Từ khóa: police officers, wear, special cameras
- Vị trí trong bài: Đoạn C, dòng 4-5
- Giải thích: “Some police forces have even equipped their officers with body-worn cameras that include facial recognition capabilities.” Câu hỏi hỏi về khả năng mà camera có, đáp án là “facial recognition” (nhận diện khuôn mặt).
Câu 12: self-fulfilling prophecies
- Dạng câu hỏi: Sentence Completion
- Từ khóa: critics, predictive policing, create
- Vị trí trong bài: Đoạn F, dòng 6-8
- Giải thích: Bài đọc nói: “critics question whether predictive policing systems might create self-fulfilling prophecies, where increased police presence in predicted areas leads to more arrests.” Đây là cụm từ chính xác trong bài, chỉ về vòng lặp tự chứng minh của thuật toán.
Passage 2 – Giải Thích
Câu 14: B
- Dạng câu hỏi: Multiple Choice
- Từ khóa: proponents, AI in policing, helps
- Vị trí trong bài: Đoạn 2, dòng 2-3
- Giải thích: “Predictive analytics, they argue, allows police departments to deploy resources proactively rather than merely responding to incidents after they happen.” Đây chính xác là đáp án B – deploy police resources more proactively. Các đáp án khác không được đề cập hoặc quá cực đoan (như “completely eliminate all types of crime”).
Câu 17: B
- Dạng câu hỏi: Multiple Choice
- Từ khóa: MIT, 2020, facial recognition systems
- Vị trí trong bài: Đoạn “Algorithmic Bias”, dòng 4-6
- Giải thích: “In 2020, researchers at MIT found that some commercial facial recognition systems had error rates exceeding 30% for dark-skinned females while maintaining near-perfect accuracy for light-skinned males.” Đây chính xác là đáp án B – các hệ thống có tỷ lệ lỗi cao hơn đối với phụ nữ da tối.
Câu 19: resources / police resources
- Dạng câu hỏi: Summary Completion
- Từ khóa: predictive analytics, deployed proactively
- Vị trí trong bài: Đoạn 2, dòng 2-3
- Giải thích: “Predictive analytics allows police departments to deploy resources proactively.” Từ cần điền là “resources” hoặc “police resources” – nguồn lực được triển khai một cách chủ động.
Câu 24: YES
- Dạng câu hỏi: Yes/No/Not Given
- Từ khóa: complete rejection, AI, losing valuable tools
- Vị trí trong bài: Đoạn cuối, dòng 4-5
- Giải thích: “Proponents of these approaches argue that categorically rejecting AI in law enforcement would abandon potentially valuable tools.” Tác giả trình bày quan điểm này một cách tán thành, cho thấy đồng ý rằng việc từ chối hoàn toàn AI sẽ đáng tiếc. Đáp án là YES.
Câu 26: NO
- Dạng câu hỏi: Yes/No/Not Given
- Từ khóa: finding balance, security, privacy, simple task
- Vị trí trong bài: Đoạn cuối, dòng 6-7
- Giải thích: “The challenge lies in calibrating regulations…” – từ “challenge” cho thấy đây không phải là nhiệm vụ đơn giản. Tác giả nhấn mạnh sự phức tạp của việc cân bằng này, mâu thuẫn với “simple task” trong câu hỏi. Đáp án là NO.
Passage 3 – Giải Thích
Câu 27: B
- Dạng câu hỏi: Multiple Choice
- Từ khóa: opacity problem, refers to
- Vị trí trong bài: Đoạn “The Opacity Problem”, dòng 1-4
- Giải thích: “A critical yet frequently underappreciated challenge posed by AI in law enforcement concerns the opacity of machine learning algorithms… These systems often function as ‘black boxes,’ producing outputs through processes that are inscrutable.” “Opacity” ở đây chỉ về việc khó hiểu cách thuật toán đưa ra quyết định. Đáp án là B.
Câu 29: B
- Dạng câu hỏi: Multiple Choice
- Từ khóa: Surveillance capitalism, involves
- Vị trí trong bài: Đoạn “Surveillance Capitalism”, dòng 1-2
- Giải thích: “…what scholar Shoshana Zuboff has termed ‘surveillance capitalism’ – the economic model that commodifies personal data and behavioral predictions.” “Commodifies” có nghĩa là biến thành hàng hóa để kiếm tiền (making money from). Đáp án là B.
Câu 32: B – Shoshana Zuboff
- Dạng câu hỏi: Matching Features
- Vị trí trong bài: Đoạn “Surveillance Capitalism”, dòng 1
- Giải thích: “…what scholar Shoshana Zuboff has termed ‘surveillance capitalism’.” Rõ ràng bà đã đặt ra thuật ngữ này. Đáp án là B.
Câu 37: deep learning architectures
- Dạng câu hỏi: Short-answer Question
- Từ khóa: architecture, AI systems, difficult to understand
- Vị trí trong bài: Đoạn “The Opacity Problem”, dòng 2
- Giải thích: “…the opacity of machine learning algorithms, particularly those employing deep learning architectures.” Đây là loại kiến trúc khiến AI khó hiểu. Đáp án là “deep learning architectures” (3 từ, vừa đủ giới hạn).
Câu 40: political choices
- Dạng câu hỏi: Short-answer Question
- Từ khóa: determine trajectory, rather than technological inevitability
- Vị trí trong bài: Đoạn cuối, dòng 8-9
- Giải thích: “The trajectory we follow will be determined not by technological inevitability but by political choices.” Câu này nói rõ là “political choices” (các lựa chọn chính trị) sẽ quyết định hướng đi của AI trong thực thi pháp luật, không phải sự tất yếu của công nghệ.
5. Từ Vựng Quan Trọng Theo Passage
Passage 1 – Essential Vocabulary
| Từ vựng | Loại từ | Phiên âm | Nghĩa tiếng Việt | Ví dụ từ bài | Collocation |
|---|---|---|---|---|---|
| artificial intelligence | n | /ˌɑːtɪˈfɪʃəl ɪnˈtelɪdʒəns/ | trí tuệ nhân tạo | Law enforcement agencies are turning to artificial intelligence | artificial intelligence system, artificial intelligence technology |
| predictive policing | n | /prɪˈdɪktɪv pəˈliːsɪŋ/ | cảnh sát dự đoán | Predictive policing systems analyze crime patterns | predictive policing program, predictive policing algorithm |
| facial recognition | n | /ˈfeɪʃəl ˌrekəɡˈnɪʃən/ | nhận diện khuôn mặt | Facial recognition technology at airports | facial recognition system, facial recognition software |
| allocate | v | /ˈæləkeɪt/ | phân bổ, phân phối | Police can allocate their patrols more strategically | allocate resources, allocate budget |
| surveillance | n | /sɜːˈveɪləns/ | sự giám sát, theo dõi | identifying suspects from surveillance footage | surveillance camera, under surveillance |
| forensic | adj | /fəˈrensɪk/ | thuộc pháp y | AI can analyze forensic evidence | forensic analysis, forensic science |
| algorithmic bias | n | /ˌælɡəˈrɪðmɪk ˈbaɪəs/ | thiên vị thuật toán | concerns about algorithmic bias | reduce algorithmic bias, detect algorithmic bias |
| discrimination | n | /dɪˌskrɪmɪˈneɪʃən/ | sự phân biệt đối xử | AI systems might discriminate against certain groups | racial discrimination, face discrimination |
| self-fulfilling prophecy | n | /ˌself fʊlˈfɪlɪŋ ˈprɒfəsi/ | lời tiên tri tự ứng nghiệm | predictive policing might create self-fulfilling prophecies | become a self-fulfilling prophecy |
| oversight committee | n | /ˈəʊvəsaɪt kəˈmɪti/ | ủy ban giám sát | police departments have established oversight committees | independent oversight committee |
| maintain | v | /meɪnˈteɪn/ | duy trì, khẳng định | officials maintain that these are just tools | maintain that, maintain order |
| evolve | v | /iˈvɒlv/ | phát triển, tiến hóa | as AI technology continues to evolve | evolve over time, continuously evolve |
Passage 2 – Essential Vocabulary
| Từ vựng | Loại từ | Phiên âm | Nghĩa tiếng Việt | Ví dụ từ bài | Collocation |
|---|---|---|---|---|---|
| fundamental trade-off | n | /ˌfʌndəˈmentəl ˈtreɪd ɒf/ | sự đánh đổi cơ bản | fundamental trade-off between security and privacy | make a trade-off, involve a trade-off |
| exacerbate | v | /ɪɡˈzæsəbeɪt/ | làm trầm trọng thêm | may exacerbate existing inequalities | exacerbate the problem, exacerbate tensions |
| regulatory framework | n | /ˈreɡjələtəri ˈfreɪmwɜːk/ | khung quy định | developing regulatory frameworks | establish a regulatory framework |
| deploy | v | /dɪˈplɔɪ/ | triển khai | deploy resources proactively | deploy technology, deploy forces |
| counterterrorism | n | /ˌkaʊntərˈterərɪzəm/ | chống khủng bố | invaluable in counterterrorism efforts | counterterrorism operations, counterterrorism measures |
| erosion | n | /ɪˈrəʊʒən/ | sự xói mòn | erosion of privacy in public spaces | erosion of rights, gradual erosion |
| mass surveillance | n | /mæs səˈveɪləns/ | giám sát đại chúng | enabling mass surveillance on unprecedented scale | conduct mass surveillance, mass surveillance program |
| reasonable suspicion | n | /ˈriːzənəbəl səˈspɪʃən/ | nghi ngờ hợp lý | require reasonable suspicion before investigating | have reasonable suspicion, based on reasonable suspicion |
| perpetuate | v | /pəˈpetʃueɪt/ | duy trì, kéo dài | may perpetuate and amplify existing biases | perpetuate discrimination, perpetuate inequality |
| differential | adj | /ˌdɪfəˈrenʃəl/ | khác biệt, phân biệt | exhibit differential accuracy rates | differential treatment, differential impact |
| feedback loop | n | /ˈfiːdbæk luːp/ | vòng phản hồi | creates a feedback loop | positive feedback loop, negative feedback loop |
| comprehensive | adj | /ˌkɒmprɪˈhensɪv/ | toàn diện | comprehensive governance frameworks | comprehensive approach, comprehensive study |
| calibrate | v | /ˈkælɪbreɪt/ | điều chỉnh chính xác | calibrating regulations | carefully calibrate, calibrate the system |
| redress | n | /rɪˈdres/ | sự bồi thường, khắc phục | mechanisms for redress when systems make errors | seek redress, provide redress |
| categorically | adv | /ˌkætəˈɡɒrɪkəli/ | một cách dứt khoát | categorically rejecting AI in law enforcement | categorically deny, categorically refuse |
Passage 3 – Essential Vocabulary
| Từ vựng | Loại từ | Phiên âm | Nghĩa tiếng Việt | Ví dụ từ bài | Collocation |
|---|---|---|---|---|---|
| proliferation | n | /prəˌlɪfəˈreɪʃən/ | sự gia tăng nhanh | the proliferation of AI systems | nuclear proliferation, rapid proliferation |
| reconfiguration | n | /ˌriːkənˌfɪɡəˈreɪʃən/ | sự cấu hình lại | a fundamental reconfiguration of relationships | require reconfiguration, undergo reconfiguration |
| stratification | n | /ˌstrætɪfɪˈkeɪʃən/ | sự phân tầng | the stratification of surveillance | social stratification, economic stratification |
| circumvent | v | /ˌsɜːkəmˈvent/ | phá vỡ, lách luật | circumvent traditional legal checks | circumvent the law, circumvent restrictions |
| eschew | v | /ɪsˈtʃuː/ | tránh xa, từ bỏ | eschewing both optimism and pessimism | eschew violence, eschew publicity |
| opacity | n | /əʊˈpæsəti/ | sự mờ đục, khó hiểu | the opacity of machine learning algorithms | algorithmic opacity, opacity of the system |
| inscrutable | adj | /ɪnˈskruːtəbəl/ | khó hiểu, khó đoán | processes that are inscrutable | remain inscrutable, inscrutable expression |
| interrogate | v | /ɪnˈterəɡeɪt/ | thẩm vấn, xem xét kỹ | impossible to interrogate or contest | interrogate the data, interrogate suspects |
| exacerbate | v | /ɪɡˈzæsəbeɪt/ | làm trầm trọng thêm | this problem is exacerbated by | exacerbate the situation, exacerbate tensions |
| proprietary | adj | /prəˈpraɪətəri/ | thuộc sở hữu riêng | claim proprietary rights over algorithms | proprietary technology, proprietary information |
| commodify | v | /kəˈmɒdɪfaɪ/ | biến thành hàng hóa | commodifies personal data | commodify labor, commodify education |
| monetize | v | /ˈmʌnɪtaɪz/ | kiếm tiền từ | seeking to monetize consumer surveillance | monetize content, monetize data |
| dual-use | adj | /ˈdjuːəl juːs/ | sử dụng kép | dual-use nature of surveillance technology | dual-use technology, dual-use goods |
| marginalization | n | /ˌmɑːdʒɪnəlaɪˈzeɪʃən/ | sự đẩy ra lề | patterns of social marginalization | economic marginalization, political marginalization |
| perverse incentive | n | /pəˈvɜːs ɪnˈsentɪv/ | động cơ sai lệch | creates perverse incentives | create perverse incentives, perverse incentive structure |
| entrench | v | /ɪnˈtrentʃ/ | củng cố, ăn sâu | potential to entrench discrimination | deeply entrenched, entrench inequality |
| veneer | n | /vəˈnɪə(r)/ | lớp vỏ bọc, vẻ ngoài | through the veneer of objectivity | veneer of respectability, thin veneer |
| operationalize | v | /ˌɒpəˈreɪʃənəlaɪz/ | thực thi, vận hành | systems operationalize structural racism | operationalize the concept, operationalize policies |
| reify | v | /ˈreɪɪfaɪ/ | vật hóa, cụ thể hóa | AI systems can reify patterns | reify abstract concepts, reify social constructs |
| pernicious | adj | /pəˈnɪʃəs/ | có hại, nguy hiểm | a pernicious feedback loop | pernicious effect, pernicious influence |
| disentangle | v | /ˌdɪsɪnˈtæŋɡəl/ | tháo gỡ, phân tách | impossible to disentangle outcomes | disentangle facts, disentangle the truth |
| piecemeal | adj | /ˈpiːsmiːl/ | từng mảnh, không có hệ thống | beyond piecemeal technical fixes | piecemeal approach, piecemeal reform |
| auditability | n | /ˌɔːdɪtəˈbɪləti/ | tính có thể kiểm toán | require transparency and auditability | ensure auditability, auditability standards |
| contestability | n | /kənˌtestəˈbɪləti/ | tính có thể tranh luận | transparency and contestability | ensure contestability, contestability of decisions |
| deterministic | adj | /dɪˌtɜːmɪˈnɪstɪk/ | mang tính quyết định | technology is not deterministic | deterministic approach, deterministic system |
Kết bài
Chủ đề về những tác động xã hội của việc sử dụng AI trong thực thi pháp luật là một trong những chủ đề “nóng” và thường xuyên xuất hiện trong các bài thi IELTS Reading hiện đại. Qua bộ đề thi mẫu này, bạn đã được trải nghiệm một bài thi hoàn chỉnh với ba passages ở các mức độ khác nhau:
Passage 1 cung cấp cái nhìn tổng quan về các ứng dụng của AI trong cảnh sát, phù hợp với độ khó Easy, giúp bạn làm quen với chủ đề và từ vựng cơ bản.
Passage 2 đi sâu hơn vào cuộc tranh luận về cân bằng giữa an ninh và quyền riêng tư, với độ khó Medium, yêu cầu kỹ năng phân tích và paraphrase tốt hơn.
Passage 3 phân tích sâu sắc các khía cạnh chính trị-xã hội phức tạp, với độ khó Hard, thử thách khả năng hiểu các khái niệm trừu tượng và lập luận học thuật của bạn.
Bộ đề này đã cung cấp cho bạn 40 câu hỏi đa dạng với 7 dạng câu hỏi khác nhau, giúp bạn rèn luyện toàn diện các kỹ năng cần thiết cho bài thi thật. Đáp án chi tiết kèm giải thích về vị trí, từ khóa và cách paraphrase sẽ giúp bạn hiểu rõ cách tiếp cận từng dạng câu hỏi.
Hơn 40 từ vựng quan trọng được phân loại theo từng passage, đi kèm phiên âm, nghĩa tiếng Việt, ví dụ và collocations sẽ giúp bạn xây dựng vốn từ vựng học thuật cần thiết không chỉ cho phần Reading mà còn cho cả bài thi IELTS nói chung.
Hãy thực hành bộ đề này trong điều kiện giống thi thật: 60 phút, không tra từ điển, và tự đánh giá kết quả của mình. Sau đó, dành thời gian đọc kỹ phần giải thích để hiểu rõ tại sao đáp án đúng và học cách paraphrase hiệu quả. Đây chính là cách tốt nhất để cải thiện band điểm Reading của bạn!