The role of pseudonymisation in training large language models: an interview with Simon Dobnik and colleagues
Simon Dobnik, a core researcher in the Mormor Karl project at the University of Gothenburg working on computational semantics and language modelling, argues in an interview with the GU Journal that we need a better understanding of the data that large language models are trained on and of how to evaluate these models for the information they capture. Pseudonymisation plays an important role in including more representative data from different areas of human activity. Read the article here: in Swedish and in English.