Dataset Viewer
The viewer reports the following columns and value ranges:

| Column | Type | Min | Max |
|---|---|---|---|
| audio_filepath | string (length) | 219 | 227 |
| audio | audio (duration, s) | 0.37 | 30 |
| duration | float32 | 0.37 | 30 |
| text | string (length) | 4 | 702 |
| whisper_transcript | string (length) | 21 | 1.15k |
| text_norm | string (length) | 4 | 698 |
| whisper_transcript_norm | string (length) | 4 | 671 |
| wer | float32 | 0 | 20 |
| prev_text | string (length) | 0 | 702 |
| prev_whisper_transcript | string (length) | 0 | 1.36k |
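Each preview row below lists the fields in schema order: audio_filepath, duration, the reference text, the Whisper pseudo-label (whisper_transcript, with inline timestamp tokens), both normalized variants, the segment-level wer, and the previous segment's text and pseudo-label. Since the viewer only shows the first rows, here is a minimal sketch for iterating the full split with the `datasets` library; the config name, split name, and WER threshold are illustrative assumptions, not taken from the repo:

```python
from datasets import load_dataset

# Stream so the large concatenated audio archives are not downloaded up front.
# name="ami-ihm" and split="train" are guesses based on the file paths below;
# use whatever configs the repository actually exposes.
ds = load_dataset(
    "bofenghuang/stt-pseudo-labeled-whisper-large-v3-multilingual",
    name="ami-ihm",
    split="train",
    streaming=True,
)

# Keep only segments whose pseudo-label closely matches the reference,
# e.g. WER <= 10% (an illustrative threshold, common in distillation setups).
clean = ds.filter(lambda ex: ex["wer"] <= 10.0)

for ex in clean.take(2):
    print(ex["duration"], ex["wer"])
    print(ex["whisper_transcript_norm"])
```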
Row 1
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:62624564:958444
duration: 29.950001
text: HAS HAS ANYONE ACTUALLY LOOKED AT THE JAVA CODE FOR THE HUH HMM YEAH I THINK SO YEAH I I DON'T KNOW ABOUT THE SEARCH FUNCTIONALITY THAT MIGHT BE ONLINE DEPENDS HOW IT'S GONNA WORK YEAH MM-HMM YEAH THAT MAKES SENSE HMM HMM YEAH YOU JUST CONCATENATE THEM TOGETHER HMM YEAH IT JUST MEANS IT LOADS ON DEMAND IT ONLY LOADS WHEN IT NEEDS A PARTICULAR TYPE OF FILE LIKE WHEN IT'S BEING ACCESSED YEAH I THINK THAT'S THE IDEA IT JUST LOADS THE PARTICULAR ONES IT NEEDS BUT IF YOU WERE DOING A SEARCH OVER THE WHOLE CORPUS YOU'D HAVE TO LOAD THEM ALL HMM
whisper_transcript: <|0.00|> Has anyone actually looked at the Java code for the AMX?<|5.00|><|5.38|> Yeah, I think so.<|6.22|><|6.22|> Yeah, I don't know about the search functionality.<|8.28|><|8.28|> That might be online.<|10.20|><|10.20|> Depends how it's gonna work.<|11.92|><|11.92|> Yeah, that makes sense.<|13.22|><|13.22|> Yeah, you just concatenate them together.<|15.60|><|15.60|> It just means it loads on demand.<|17.42|><|17.42|> It only loads when it needs a particular type of file,<|22.24|><|22.24|> like when it's being accessed.<|23.40|><|23.40|> Yeah, I think that's the idea.<|24.40|><|24.40|> It just loads the particular ones it needs.<|26.96|><|26.96|> But if you were doing a search over the whole corpus,<|28.66|><|28.66|> you'd have to load them all.<|29.96|>
text_norm: has has anyone actually looked at the java code for the huh yeah i think so yeah i i do not know about the search functionality that might be online depends how it is going to work yeah yeah that makes sense yeah you just concatenate them together yeah it just means it loads on demand it only loads when it needs a particular type of file like when it is being accessed yeah i think that is the idea it just loads the particular ones it needs but if you were doing a search over the whole corpus you would have to load them all
whisper_transcript_norm: has anyone actually looked at the java code for the amx yeah i think so yeah i do not know about the search functionality that might be online depends how it is going to work yeah that makes sense yeah you just concatenate them together it just means it loads on demand it only loads when it needs a particular type of file like when it is being accessed yeah i think that is the idea it just loads the particular ones it needs but if you were doing a search over the whole corpus you would have to load them all
wer: 4.716981
prev_text: OKAY DOES ANYONE WANT TO SEE UH STEVE'S FEEDBACK FROM THE SPECIFICATION RIGHT NOT REALLY UM JUST WHAT HE'S TALKING ABOUT LIKE DUPLICATION OF EFFORT AND LIKE DUPLICATION OF EFFORT AND STUFF AND UM YEAH HE WAS SAYING THAT WE SHOULD MAYBE UH THINK ABOUT HAVING A PROTOTYPE FOR WEEK SIX WHICH IS NEXT WEEK YEAH SO WE SHOULD PROBABLY PRIORITIZE OUR PACKAGES MM YEAH YEAH HMM
prev_whisper_transcript: <|0.00|> Does anyone want to see Steve's feedback from the specification?<|4.80|><|4.80|> Not really, just what he was talking about, like duplication of effort and stuff.<|11.20|><|11.20|> And saying that we should maybe think about having a prototype for week six, which is next week.<|21.00|><|21.00|> So we should probably prioritise our packages.<|28.34|>
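The whisper_transcript field embeds Whisper's timestamp tokens directly in the string (`<|0.00|> ... <|5.00|><|5.38|> ... <|6.22|>`). A small sketch for splitting such a string into (start, end, text) segments; the regex is illustrative, not the dataset's own tooling:

```python
import re

# Timestamp tokens look like <|12.34|>; text sits between consecutive tokens.
_TS = re.compile(r"<\|(\d+\.\d+)\|>")

def parse_whisper_transcript(s: str) -> list[tuple[float, float, str]]:
    parts = _TS.split(s)  # alternating: text, timestamp, text, timestamp, ...
    segments = []
    # parts[0] is the (usually empty) prefix; timestamps sit at odd indices.
    for i in range(1, len(parts) - 2, 2):
        start, text, end = parts[i], parts[i + 1], parts[i + 2]
        if text.strip():  # skip the empty gap between back-to-back tokens
            segments.append((float(start), float(end), text.strip()))
    return segments
```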
Row 2
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:89537982:840044
duration: 26.25
text: SO I GUESS UM IF I'M GONNA BE SEGMENTING IT WITH A L. C. SEG THEN THAT'S LIKE SAME FORMAT I'D WANT TO UM PUT IT BACK OUT IN SO IT'D BE EQUIVALENT WELL LIKE THE INTEGRATION WHAT DO YOU MEAN INTEGRATION HMM I DON'T KNOW I DON'T THINK ANYONE'S BEEN ALLOCATED TO DO THAT YET YEAH YEAH YEAH DEFINITELY HMM YEAH YEAH IT C COULD BE DIFFICULT YEAH YEAH WELL I GUESS THE IMPORTANT THING IS TO GET THE CRUCIAL M MODULES BUILT YE YEAH YEAH AND THEN
whisper_transcript: <|0.00|> So I guess if I'm going to be segmenting it with a LCSEG,<|4.84|><|4.84|> then that's the same format I'd want to put it back out in.<|9.02|><|9.02|> So it'd be equivalent.<|10.72|><|10.72|> Well, like the integration.<|12.40|><|12.40|> What do you mean, integration?<|13.88|><|13.88|> Don't know.<|14.18|><|14.18|> I don't think anyone's been allocated to do that yet.<|16.08|><|16.08|> Yeah, yeah, definitely.<|17.90|><|17.90|> Yeah, it could be difficult.<|19.92|><|19.92|> Well, I guess the important thing is to get the crucial modules built.<|26.26|>
text_norm: so i guess if i am going to be segmenting it with a l c seg then that is like same format i would want to put it back out in so it would be equivalent well like the integration what do you mean integration i do not know i do not think anyone has been allocated to do that yet yeah yeah yeah definitely yeah yeah it c could be difficult yeah yeah well i guess the important thing is to get the crucial m modules built ye yeah yeah and then
whisper_transcript_norm: so i guess if i am going to be segmenting it with a lcseg then that is the same format i would want to put it back out in so it would be equivalent well like the integration what do you mean integration do not know i do not think anyone has been allocated to do that yet yeah yeah definitely yeah it could be difficult well i guess the important thing is to get the crucial modules built
wer: 17.204302
prev_text: YEAH I'VE HAD A B I'VE HAD A LOOK AT THE THE TOPIC SEGMENTS HOW IT'S STORED AND THEN YEAH TH THOSE ARE FEW PER MEETING AND IT UM WELL IT GIVES A TIME STAMP AND INSIDE EACH ONE THERE'S UH THE ACTUAL LIKE UTTERANCE SEGMENTS AND THE LIST OF THEM THAT OCCURRED AND THEY'RE ALL NUMBERED UM SO THAT'S WHERE THAT'S STORED YEAH
prev_whisper_transcript: <|0.00|> Yeah I've had a look at the topic segments, how it's stored.<|5.00|><|5.00|> And yeah there's a few per meeting and it gives a timestamp and inside each one there's<|13.96|><|13.96|> the actual utterance segments, the list of them that occurred and they're all numbered<|18.38|><|18.38|> so that's how that's stored.<|20.10|>
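Comparing text with text_norm across these rows shows the normalization applied: lowercasing, punctuation stripped, contractions expanded ("IT'D" becomes "it would"), number words written as digits (even "OH RIGHT" becoming "0 right"), and hesitation fillers such as "UM", "UH", "HMM" and "MM-HMM" dropped. That matches the behaviour of Whisper's English text normalizer; assuming that (or an equivalent reimplementation) produced the *_norm columns, a sketch:

```python
# pip install openai-whisper
from whisper.normalizers import EnglishTextNormalizer

# Assumption: the *_norm columns were produced with this normalizer or an
# equivalent; spot-check against the rows shown here before relying on it.
normalizer = EnglishTextNormalizer()

print(normalizer("SO IT'D BE EQUIVALENT WELL LIKE THE INTEGRATION"))
# -> "so it would be equivalent well like the integration"  (cf. Row 2)
```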
Row 3
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:95466168:921962
duration: 28.809938
text: SO DID DID YOU HAVE TO COMBINE THEM ALL AND AND THEN RE ORDER THEM YEAH YE YEAH C RIGHT YEAH SO THAT'S APPROACH UM WELL I WAS GOING TO DO SO YEAH WE MAY AS WELL COLLABORATE IN THE WORD FILES I'M NOT SURE I WHAT YOU MEAN OH RIGHT HMM HMM MM I THOUGHT THEY WERE LOCAL TO TH A PARTICULAR MEETING HMM MM IS THERE ANYTHING ELSE WE SHOULD DISCUSS YEAH SHOULD WE NOT HAVE LIKE A GROUP DIRECTORY OR SOMETHING WHERE WE CAN PUT ALL OUR CODE IN AND THAT KINDA THING HMM I'VE GOTTEN MM HARDLY ANY HMM YEAH WE CAN ASK STEVE IF UM WE CAN GET SPACE
whisper_transcript: <|0.00|> So did you have to combine them all and then reorder them?<|2.90|><|2.90|> Yeah, yeah, right.<|4.82|><|4.82|> Yeah, so that's approach and, well, I was going to do so.<|8.94|><|8.94|> We may as well collaborate.<|10.34|><|10.34|> And the word files, I'm not sure what you mean.<|12.64|><|12.64|> All right, I thought they were local<|14.54|><|14.54|> to a particular meeting.<|16.64|><|16.64|> Is there anything else we should discuss?<|18.26|><|18.26|> Yeah, should we not have like a group directory<|20.06|><|20.06|> or something where we can put all our code in<|22.28|><|22.28|> and that kind of thing?<|23.78|><|23.78|> Mm, I've got hardly any.<|25.96|><|25.96|> Yeah, we can ask Steve if we can get space<|28.80|>
text_norm: so did did you have to combine them all and and then re order them yeah ye yeah c right yeah so that is approach well i was going to do so yeah we may as well collaborate in the word files i am not sure i what you mean 0 right i thought they were local to th a particular meeting is there anything else we should discuss yeah should we not have like a group directory or something where we can put all our code in and that kinda thing i have gotten hardly any yeah we can ask steve if we can get space
whisper_transcript_norm: so did you have to combine them all and then reorder them yeah yeah right yeah so that is approach and well i was going to do so we may as well collaborate and the word files i am not sure what you mean all right i thought they were local to a particular meeting is there anything else we should discuss yeah should we not have like a group directory or something where we can put all our code in and that kind of thing i have got hardly any yeah we can ask steve if we can get space
wer: 14.018692
prev_text: YEAH AND THEN WE'LL MAYBE HAVE TO PRIORITIZE SOMEBODY INTO JUST INTEGRATING IT MM-HMM YEAH I THINK SO UH YEAH HMM YEAH YEAH JASMINE I THOUGHT YOU JUST SAID THAT YOU'D UH LOOKED AT EXTRACTING THE TEXT YEAH SO YOU YOU SAID YOU DID IT IN PYTHON YEAH YEAH DID YOU USE UH B THE X. L. UH X. M. L. PARSER IN PYTHON RIGHT YEAH SOUNDS PRETTY GOOD SO UM 'CAUSE YEAH I WAS HAVING A LOOK IN IT A LOOK AT IT AS WELL AND I NOTICED THE UM THE SPEAKERS ARE ALL IN THAT SEPARATE FILE
prev_whisper_transcript: <|0.00|> Yeah, and then we'll maybe have to prioritise somebody into integrating it.<|5.36|><|5.36|> Yeah, I think so.<|6.96|><|6.96|> Yeah, Yasmin, I thought you said that you'd looked at extracting the text.<|11.92|><|11.92|> Yeah.<|12.72|><|12.72|> So you said you did it in Python, yeah?<|15.12|><|15.12|> Yeah, did you use the XML parser in Python?<|18.64|><|18.64|> Right.<|19.68|><|19.68|> Sounds pretty good.<|20.72|><|20.72|> So, because, yeah, I was having a look at it as well,<|24.92|><|24.92|> and I noticed the speakers, they're all in a separate file.<|29.92|>
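The wer column matches the word error rate, in percent, between text_norm and whisper_transcript_norm (e.g. roughly 4.7 for Row 1). A sketch recomputing it with jiwer; whether jiwer was the tool originally used is an assumption:

```python
import jiwer

def row_wer(example: dict) -> float:
    # Percent WER between the normalized reference and the normalized
    # Whisper pseudo-label, mirroring the dataset's wer column.
    return 100.0 * jiwer.wer(
        example["text_norm"], example["whisper_transcript_norm"]
    )
```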
Row 4
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:24826536:783404
duration: 24.48
text: I I THOUGHT WE WOULD JUST HAVE LIKE UM ONE BIG SUMMARY UM WITH ALL THE UH DIFFERENT IMPORTANCE LEVELS UM DISPLAYED ON IT AND DEPENDING ON WHAT OUR UM ZOOM LEVEL IS WE JUST DISPLAY A PART OF IT AND WE WOULD HAVE ONE VERY BIG THING OFF LINE AND FROM THAT WE WOULD JUST SELECT WHAT WE ARE DISPLAYING YES
whisper_transcript: <|0.00|> I thought we would just have like one big summary with only different importance levels displayed on it<|10.80|><|10.80|> and depending on what our zoom level is we just display a part of it.<|16.88|><|16.88|> I mean we would have one very big thing offline and from that we would just select what we are displaying.<|24.48|>
text_norm: i i thought we would just have like one big summary with all the different importance levels displayed on it and depending on what our zoom level is we just display a part of it and we would have one very big thing off line and from that we would just select what we are displaying yes
whisper_transcript_norm: i thought we would just have like one big summary with only different importance levels displayed on it and depending on what our zoom level is we just display a part of it i mean we would have one very big thing offline and from that we would just select what we are displaying
wer: 14.035088
prev_text: YEAH YEAH HE SUGGESTED THAT WE COULD HAVE AN UH INITIAL PROTOTYPE I KNOW I'D B I'D BE SURPRISED IF WE CAN GET ANYTHING WORKING BY NEXT WEEK ALRIGHT YEAH YEAH I MEAN IF WE JUST WANT TO HAVE UM SOME DATA FOR THE USER FACE COULD EVEN BE RANDOM DATA UH MM MM YEAH I'M HMM YES HMM YES HMM I'M NOT SO SURE
prev_whisper_transcript: <|0.00|> Yeah, it suggested that we could have an initial prototype.<|3.28|><|3.66|> I know, I'd be surprised if we can get anything working by next week.<|9.10|><|9.66|> I mean, if we just want to have some data for the user face,<|16.54|><|16.64|> it could even be random data.<|18.02|><|18.02|> Yeah, I'm not so sure.<|24.24|>
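Every audio_filepath ends with two colon-separated integers after the .zip member path (e.g. `EN2001a.zip:62624564:958444`), which look like a byte offset and byte length into the concatenated archive; that interpretation is an assumption and should be checked against the dataset's loading code. A sketch splitting the field:

```python
def split_audio_filepath(p: str) -> tuple[str, int, int]:
    # ".../EN2001a.zip:62624564:958444" -> (zip path, 62624564, 958444)
    # Reading the two integers as byte offset and length is an assumption.
    path, offset, length = p.rsplit(":", 2)
    return path, int(offset), int(length)
```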
Row 5
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:29640044:891244
duration: 27.85
text: ALL THE SOUND FILES ALL IN UM I R I I'M GETTING QUITE LOST UM AT THE MOMENT BECAUSE UM W WHAT'S UM OUR DIFFERENCE BETWEEN THE UM SE UM UH THE IMPORTANCE MEASURE AND THE SKIMMING I MEAN DO WE DO BOTH OR IS IT THE SAME THING OKAY SO BUT WHEN WHEN WE TALK ABOUT SUMMARIES YOU TALK ABOUT THIS UH ABO ABOUT SKIMMING AND NOT ABOUT YEAH
whisper_transcript: <|0.00|> all the sound files.<|2.00|><|4.92|> I'm getting quite lost at the moment because what's our difference between the<|11.32|><|11.32|> importance measure and the skimming?<|18.00|><|18.00|> I mean, do we do both or is it the same thing?<|21.00|><|21.00|> Okay, so, but when we talk about summaries, we talk about this, about skimming and not about...<|27.84|>
text_norm: all the sound files all in i r i i am getting quite lost at the moment because w what is our difference between the se the importance measure and the skimming i mean do we do both or is it the same thing okay so but when when we talk about summaries you talk about this abo about skimming and not about yeah
whisper_transcript_norm: all the sound files i am getting quite lost at the moment because what is our difference between the importance measure and the skimming i mean do we do both or is it the same thing okay so but when we talk about summaries we talk about this about skimming and not about .
wer: 18.75
prev_text: SO FOR EXAMPLE YOU WOULD UM GIVE A HIGH VALUE TO THOSE UM SEQUENCES YOU WANT TO DISPLAY IN THE MEETING SERIES SUMMARY AND YOU JUST CUT OFF THAT WAS WHAT I SH I THOUGHT YEAH I THOUGHT BUT I THINK THE M DIFFERENCE MIGHT BE THAT WE WANT JUST WANT TO HAVE UM THE WORDS AND THAT'S NOT SO MUCH WHAT HE MEANT WITH NOT POSSIBLY LOADING EVERYTHING WAS THAT YOU M UM LOAD ALL THE UH ANNOTATION STUFF
prev_whisper_transcript: <|0.00|> So for example you would give a high value to those sequences you want to display in the meeting series summary.<|9.00|><|9.00|> And you just cut off, that was what I thought.<|14.00|><|14.00|> But I think the difference might be that we would just want to have the words and that's not so much what he meant<|23.00|>
Row 6
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:20321626:796204
duration: 24.879999
text: FOR EXAMPLE IT LOADS ALL THE UTTERANCES AND SO ON BUT IT DOESN'T LOAD UM THE DISCOURSE ACTS AND FOR EXAMPLE NOT THE AND WHAT'S WHAT ELSE THERE NOT THE SUMMARIES IT ONLY LOADS THOSE ON DEMAND Y YOU MEAN THAT YOU UM BASICALLY SPLIT UP TH THE BIG THING INTO UM DIFFERENT SUMMARIES
whisper_transcript: <|0.00|> For example it loads all the utterances and so on, but it doesn't load the discourse acts and...<|7.04|><|8.56|> for example not the...<|10.24|><|11.04|> what else was there?<|13.04|><|13.04|> Not the summaries.<|15.04|><|15.04|> It only loads those on demand.<|17.04|><|17.04|> You mean that you basically split up the big thing into different summaries?<|24.88|>
text_norm: for example it loads all the utterances and so on but it does not load the discourse acts and for example not the and what is what else there not the summaries it only loads those on demand y you mean that you basically split up th the big thing into different summaries
whisper_transcript_norm: for example it loads all the utterances and so on but it does not load the discourse acts and for example not the what else was there not the summaries it only loads those on demand you mean that you basically split up the big thing into different summaries
wer: 11.320755
prev_text: YEAH RIGHT ISN'T THAT THE SKIMMING ISN'T THAT THE SKIMMING YEAH BUT IT USE THE SAME DATA YEAH A AND YEAH I THINK WE ALSO THOUGHT ABOUT COMBINING THAT MEASURE WITH UM THE MEASURES I GET FROM UM S UH HOT SPOTS AND SO ON SO THAT WOULD ALSO BE ON UTTERANCE LEVEL I THINK I THINK YES SURE YES YES RIGHT OOPS IT DOES SO I DEFINE BASELINE AND WHAT IT LOADS
prev_whisper_transcript: <|0.00|> Yeah, but isn't that the skimming?<|2.20|><|2.20|> Isn't that the skimming?<|3.28|><|3.28|> Yeah, but it uses the same data.<|6.56|><|6.56|> Yeah, I think we also thought about combining that measure<|10.16|><|10.16|> with the measures I get from hotspots and so on.<|16.12|><|16.12|> So that would also be on a trans level, I think.<|21.32|><|21.32|> Yes, sure.<|22.08|><|22.08|> Yes.<|22.44|><|22.44|> Yes.<|23.08|><|23.08|> Perhaps it has a defined baseline on what it loads.<|26.80|>
Row 7
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:90378100:744044
duration: 23.25
text: UM AND BASICALLY IT'S UH WORDS THAT ARE UTTERED IN A SEQUENCE WITHOUT PAUSES BUT SOMETIMES UM HOWEVER THERE ARE UM SHORT PAUSES IN IT AND THEY'RE INDICATED BY SQUARE BRACKETS PAUSE OR SOMETHING IN THE DATA UM SOMETI UH BUT UH THE ANNOTATORS DECIDED WHAT WAS ONE SEGMENT AND WHAT WASN'T I THINK SO
whisper_transcript: <|0.00|> Basically it's words that are added in a sequence without pauses,<|6.24|><|6.24|> but sometimes, however, there are short pauses in it<|10.40|><|10.40|> and they are indicated by square brackets, pause or something in the data.<|16.04|><|16.04|> Sometimes the annotators decided what was one segment and what wasn't.<|22.16|><|22.16|> I think so.<|23.26|>
text_norm: and basically it is words that are uttered in a sequence without pauses but sometimes however there are short pauses in it and they are indicated by square brackets pause or something in the data someti but the annotators decided what was one segment and what was not i think so
whisper_transcript_norm: basically it is words that are added in a sequence without pauses but sometimes however there are short pauses in it and they are indicated by square brackets pause or something in the data sometimes the annotators decided what was one segment and what was not i think so
wer: 7.843137
prev_text: THERE THERE ARE TIME STAMPS UM FOR WELL SEGMENTS UM AND FOR TH UM SEGMENTS IS FOR EXAMPLE WHEN WHEN YOU LOOK AT THE DATA WHAT IS DISPLAYED IN ONE LINE WHAT WHEN WHEN YOU LOOK AT IT IN THE HMM I THINK SO ISN'T UM FOR EX UM I I COMPARED IT WITH WHAT I DID FOR THE PAUSE UM DURATION EXTRACTION
prev_whisper_transcript: <|0.00|> There are timestamps for the segments.<|4.00|><|6.00|> And for segments, for example, when you look at the data,<|10.80|><|10.80|> what is displayed in one line?<|12.60|><|12.60|> When you look at it in the...<|15.60|><|15.60|> Hm? I think so.<|18.10|><|18.10|> For example, I compared it with what I did for the pause duration extraction.<|24.74|>
Row 8
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:112956206:880684
duration: 27.52
text: YEAH BUT UM I THINK FOR SOME ANNOTATIONS UM AN UTTERA CA UTTERANCE CAN HAVE SEVERAL UM TYPES FOR EXAMPLE FOR THE DIALOGUE ACTS AND SO ON OKAY YEAH THAT SHOULD BE FOR YEAH SHOULD BE YEAH YES BUT THAT'S YEAH EVERYTHING THAT'S A WORD HAS A STI TIME STAMP THAT'S AT THE END THAT'S AT THE END I THINK HER TIME YEAH MAYBE DIDN'T HAVE A LOOK AT OUR MEETINGS
whisper_transcript: <|0.00|> Yeah, but I think for some annotations,<|6.00|><|6.00|> an adjunct can have several types.<|11.00|><|11.00|> For example, for the dialogue acts and so on.<|12.96|><|12.96|> OK, yeah, that should be for, yeah.<|16.04|><|16.04|> Should be, yeah.<|16.68|><|16.68|> Yes, but that's, yeah, everything that's word<|19.92|><|19.92|> has a timestamp.<|21.32|><|21.32|> That's at the end.<|23.24|><|23.24|> That's at the end, I think.<|25.36|><|25.36|> Yeah, maybe you didn't have a look at our meetings.<|27.52|>
text_norm: yeah but i think for some annotations an uttera ca utterance can have several types for example for the dialog acts and so on okay yeah that should be for yeah should be yeah yes but that is yeah everything that is a word has a sti time stamp that is at the end that is at the end i think her time yeah maybe did not have a look at our meetings
whisper_transcript_norm: yeah but i think for some annotations an adjunct can have several types for example for the dialog acts and so on ok yeah that should be for yeah should be yeah yes but that is yeah everything that is word has a timestamp that is at the end that is at the end i think yeah maybe you did not have a look at our meetings
wer: 15.068493
prev_text: UM AND BASICALLY IT'S UH WORDS THAT ARE UTTERED IN A SEQUENCE WITHOUT PAUSES BUT SOMETIMES UM HOWEVER THERE ARE UM SHORT PAUSES IN IT AND THEY'RE INDICATED BY SQUARE BRACKETS PAUSE OR SOMETHING IN THE DATA UM SOMETI UH BUT UH THE ANNOTATORS DECIDED WHAT WAS ONE SEGMENT AND WHAT WASN'T I THINK SO
prev_whisper_transcript: <|0.00|> Basically it's words that are added in a sequence without pauses,<|6.24|><|6.24|> but sometimes, however, there are short pauses in it<|10.40|><|10.40|> and they are indicated by square brackets, pause or something in the data.<|16.04|><|16.04|> Sometimes the annotators decided what was one segment and what wasn't.<|22.16|><|22.16|> I think so.<|23.26|>
Row 9
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:48333398:954604
duration: 29.83
text: UH I I THINK IT WOULDN'T AS IT OCCURS I MEAN IT WOULD BE IT OCCURS IN EVERY MEETING SO AND I THINK IT EVEN HAS UH ITS OWN ANNOTATION LIKE DIGITS OR SOMETHING SO THAT SHOULD BE REALLY EASY TO CUT OUT YEAH I'M SURE AH IT'S JUST TO TEST THE SYSTEM I THINK SO MM THEY HAVE TO READ NUMBERS FROM UH I DIDN'T HAVE A LOOK AT THAT SO THEY MM-HMM
whisper_transcript: <|0.00|> I think it wouldn't as it occurs, I mean it would be the case in every meeting.<|8.34|><|8.34|> And I think it even has its own annotation like digits or something so<|14.50|><|14.50|> that should be really easy to cut out. Yeah, I'm sure. It's just to test the system I think.<|23.58|><|23.58|> They have to read numbers. I didn't have a look at that so.<|29.82|>
text_norm: i i think it would not as it occurs i mean it would be it occurs in every meeting so and i think it even has its own annotation like digits or something so that should be really easy to cut out yeah i am sure ah it is just to test the system i think so they have to read numbers from i did not have a look at that so they
whisper_transcript_norm: i think it would not as it occurs i mean it would be the case in every meeting and i think it even has its own annotation like digits or something so that should be really easy to cut out yeah i am sure it is just to test the system i think they have to read numbers i did not have a look at that so
wer: 10.958904
prev_text: YEAH BUT UM I THINK FOR SOME ANNOTATIONS UM AN UTTERA CA UTTERANCE CAN HAVE SEVERAL UM TYPES FOR EXAMPLE FOR THE DIALOGUE ACTS AND SO ON OKAY YEAH THAT SHOULD BE FOR YEAH SHOULD BE YEAH YES BUT THAT'S YEAH EVERYTHING THAT'S A WORD HAS A STI TIME STAMP THAT'S AT THE END THAT'S AT THE END I THINK HER TIME YEAH MAYBE DIDN'T HAVE A LOOK AT OUR MEETINGS
prev_whisper_transcript: <|0.00|> Yeah, but I think for some annotations,<|6.00|><|6.00|> an adjunct can have several types.<|11.00|><|11.00|> For example, for the dialogue acts and so on.<|12.96|><|12.96|> OK, yeah, that should be for, yeah.<|16.04|><|16.04|> Should be, yeah.<|16.68|><|16.68|> Yes, but that's, yeah, everything that's word<|19.92|><|19.92|> has a timestamp.<|21.32|><|21.32|> That's at the end.<|23.24|><|23.24|> That's at the end, I think.<|25.36|><|25.36|> Yeah, maybe you didn't have a look at our meetings.<|27.52|>
Row 10
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:2671468:845804
duration: 26.43
text: BECAUSE I UM I IN MY OUTLINE I TALKED ABOUT UM USING THE UM DISCOURSE ACTS FIRST AND UM THEN IN THE CHUNKS OF TEXT I FOUND LOOKING FOR WORD PATTERNS AND SO ON SO UM I WOULD FOR EXAMPLE NEED THE UM MOST FREQ UM FREQUENT WORDS
whisper_transcript: <|0.00|> because in my outline I talked about using the discourse<|8.58|><|8.58|> acts first.<|10.34|><|10.34|> And then in the chunks of text, I<|14.70|><|14.70|> found looking for word patterns and so on.<|18.24|><|18.24|> So I would, for example, need the most frequent words.<|26.44|>
text_norm: because i i in my outline i talked about using the discourse acts 1st and then in the chunks of text i found looking for word patterns and so on so i would for example need the most freq frequent words
whisper_transcript_norm: because in my outline i talked about using the discourse acts 1st and then in the chunks of text i found looking for word patterns and so on so i would for example need the most frequent words
wer: 7.317073
prev_text: UH TH YEAH 'KAY UM I JUST UM WONDERED SO WHO'S UH THEN DOING UM THE FREQUENCIES ON ON THE WORDS BECAUSE I'M I THINK I WILL ALSO UM I COULD ALSO MAKE USE OF IT UM FOR THE AGREEMENT AND DISAGREEMENT THING
prev_whisper_transcript: <|0.00|> I just wondered, so who's then doing the frequencies on the words?<|12.80|><|12.80|> Because I think I could also make use of it for the agreement and disagreement thing.<|21.60|>
Row 11
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:97267042:841004
duration: 26.280001
text: I THINK IT WOULD BE YOU KNOW L AS AS BIG AT AS THE HOT SPOT ANNOTATION THINGS THAT'S QUITE SMALL YEAH THAT'S SOME UTTERANCES YES YEAH YEAH SO I WOULD PROBABLY JUST CONCATENATE ALL MY UM TEXT CHUNKS AND THEN LET'S SAY M I WILL RUN OVER IT YES YES DEFINITELY YEAH RIGHT YE M
whisper_transcript: <|0.00|> I think it would be as big as the hotspot annotation things.<|9.54|><|9.54|> That's quite small, yeah, that's some utterances.<|12.06|><|12.06|> Yes, yeah, yeah.<|13.54|><|13.54|> So I would probably just concatenate all my text chunks<|18.34|><|18.34|> and then let's say I will run over it.<|21.24|><|21.24|> Yes.<|21.64|><|21.64|> Yes, definitely.<|24.20|><|24.20|> Yeah, right.<|26.28|>
text_norm: i think it would be you know l as as big at as the hot spot annotation things that is quite small yeah that is some utterances yes yeah yeah so i would probably just concatenate all my text chunks and then let us say m i will run over it yes yes definitely yeah right ye m
whisper_transcript_norm: i think it would be as big as the hotspot annotation things that is quite small yeah that is some utterances yes yeah yeah so i would probably just concatenate all my text chunks and then let us say i will run over it yes yes definitely yeah right
wer: 17.241379
prev_text: SO IF YOU CUT OFF ALL THAT I'D WON'T BE USE OR YEAH I I BUT I NEED IT FOR MY CHUNKS THEN I WOULD YOU KNOW YEAH BUT I'D UH I WOULD LIKE TO LOOK AT THE FREQUENCY OF WORDS IN MY UM IN THE REGIONS OF TEXT I FOUND OUT TO BE INTERESTING SO I WOULDN'T NEED IT IT IT WOULD HAVE TO BE RE CALCULATED ONLY FOR MY SEGMENTS HUH UH UH MM
prev_whisper_transcript: <|0.00|> So if you cut off all that, it wouldn't be...<|5.00|><|5.00|> Yeah, but I need it for my chunks then.<|8.36|><|8.36|> Yeah, but I would like to look at the frequency of words<|12.16|><|12.16|> in the regions of text I found out to be interesting.<|20.76|><|20.76|> So I wouldn't need it.<|22.16|><|22.16|> It would have to be recalculated only for my segments.<|28.44|>
Row 12
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:4359064:255724
duration: 7.99
text: UM JASMINE UH UM WHAT IS UM THE TEXT YOU'RE EXTRACTING UH LOOKING LIKE THEN AT THE END
whisper_transcript: <|0.00|> Yasmin, what is the text you're extracting looking like then at the end?<|8.00|>
text_norm: jasmine what is the text you are extracting looking like then at the end
whisper_transcript_norm: yasmin what is the text you are extracting looking like then at the end
wer: 7.142857
prev_text: I THINK IT WOULD BE YOU KNOW L AS AS BIG AT AS THE HOT SPOT ANNOTATION THINGS THAT'S QUITE SMALL YEAH THAT'S SOME UTTERANCES YES YEAH YEAH SO I WOULD PROBABLY JUST CONCATENATE ALL MY UM TEXT CHUNKS AND THEN LET'S SAY M I WILL RUN OVER IT YES YES DEFINITELY YEAH RIGHT YE M
prev_whisper_transcript: <|0.00|> I think it would be as big as the hotspot annotation things.<|9.54|><|9.54|> That's quite small, yeah, that's some utterances.<|12.06|><|12.06|> Yes, yeah, yeah.<|13.54|><|13.54|> So I would probably just concatenate all my text chunks<|18.34|><|18.34|> and then let's say I will run over it.<|21.24|><|21.24|> Yes.<|21.64|><|21.64|> Yes, definitely.<|24.20|><|24.20|> Yeah, right.<|26.28|>
Row 13
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:63583082:955884
duration: 29.870001
text: BECAUSE UM I I THINK IT'S ACTUALLY VERY SIMILAR TO WHAT I DID FOR MY UM SPEAKER UM UH EXTRACTION AND I THINK I WOULD CH PERHAPS HAVE TO CHANGE TWO LINES OF CODES TO GET YOU UM FOR EACH MEETING A FILE THAT SAYS FR FROM UM THIS MILLISECOND TO THIS MILLISECOND THERE WAS THIS SEQUENCE OF WORDS AND SO ON SO THAT'S JUST CHANGING TWO LINES OF CODE AND IT WOULD GIVE YOU THAT SO
whisper_transcript: <|0.00|> Because I think it's actually very similar<|3.26|><|3.26|> to what I did for my speaker extraction.<|7.26|><|8.90|> And I think I would perhaps have to change<|13.52|><|13.52|> two lines of code to get you for each meeting<|17.44|><|17.44|> a file that says from this millisecond to this millisecond<|22.00|><|22.00|> there was this sequence of words and so on.<|24.66|><|24.66|> So that's just changing two lines of code<|28.66|><|28.66|> and it would give you that.<|29.86|>
text_norm: because i i think it is actually very similar to what i did for my speaker extraction and i think i would ch perhaps have to change 2 lines of codes to get you for each meeting a file that says fr from this millisecond to this millisecond there was this sequence of words and so on so that is just changing 2 lines of code and it would give you that so
whisper_transcript_norm: because i think it is actually very similar to what i did for my speaker extraction and i think i would perhaps have to change 2 lines of code to get you for each meeting a file that says from this millisecond to this millisecond there was this sequence of words and so on so that is just changing 2 lines of code and it would give you that
wer: 6.849315
prev_text: UM JASMINE UH UM WHAT IS UM THE TEXT YOU'RE EXTRACTING UH LOOKING LIKE THEN AT THE END
prev_whisper_transcript: <|0.00|> Yasmin, what is the text you're extracting looking like then at the end?<|8.00|>
Row 14
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:7317936:883244
duration: 27.6
text: UM YEAH SO FAR I EXTRACTED UM THE DURA DURATIONS BUT IT'S FROM THE WORDS FILE SO I COULD JUST UM CONTATENATE CONCATENATE UM THE WORDS INSTEAD OF THE DURATIONS AND IT SHOULD I MEAN SHOULD BE VERY STRAIGHT FORWARD I CAN TRY TO DO IT AND SEND IT TO YOU PE AND YOU HAVE A LOOK AT IT WILL IT MAKE SENSE FOR WHAT YOU WANT
whisper_transcript: <|0.00|> Yeah, so far I extracted the durations, but it's from the words file, so I could just concatenate the words instead of the durations.<|15.84|><|17.06|> And it should be very straightforward.<|20.42|><|20.82|> I can try to do it and send it to you, and you have a look at it, how it makes sense for what you want.<|27.60|>
text_norm: yeah so far i extracted the dura durations but it is from the words file so i could just contatenate concatenate the words instead of the durations and it should i mean should be very straight forward i can try to do it and send it to you pe and you have a look at it will it make sense for what you want
whisper_transcript_norm: yeah so far i extracted the durations but it is from the words file so i could just concatenate the words instead of the durations and it should be very straightforward i can try to do it and send it to you and you have a look at it how it makes sense for what you want
wer: 15.625
prev_text: BECAUSE UM I I THINK IT'S ACTUALLY VERY SIMILAR TO WHAT I DID FOR MY UM SPEAKER UM UH EXTRACTION AND I THINK I WOULD CH PERHAPS HAVE TO CHANGE TWO LINES OF CODES TO GET YOU UM FOR EACH MEETING A FILE THAT SAYS FR FROM UM THIS MILLISECOND TO THIS MILLISECOND THERE WAS THIS SEQUENCE OF WORDS AND SO ON SO THAT'S JUST CHANGING TWO LINES OF CODE AND IT WOULD GIVE YOU THAT SO
prev_whisper_transcript: <|0.00|> Because I think it's actually very similar<|3.26|><|3.26|> to what I did for my speaker extraction.<|7.26|><|8.90|> And I think I would perhaps have to change<|13.52|><|13.52|> two lines of code to get you for each meeting<|17.44|><|17.44|> a file that says from this millisecond to this millisecond<|22.00|><|22.00|> there was this sequence of words and so on.<|24.66|><|24.66|> So that's just changing two lines of code<|28.66|><|28.66|> and it would give you that.<|29.86|>
Row 15
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:10588488:887084
duration: 27.719999
text: YEAH UH P I MEAN IT I JUST LET IT RUN OVER ALL THE FILES SO YES I JUST ORDERED UH I ORDERED ACCORDING TO THE UM STARTING TIMES OF THE UTTERANCES WHAT DO YOU MEAN BY DIFFE YEAH I MEAN T I I HAVE ONE WHAT I GIVE YOU WOULD BE ONE FILE FOR EACH MEETING YEAH NOT FOR EACH MEETING SERIES I DIDN'T DO THAT YET YEAH ONE GROUP YEAH YEAH I MEAN THERE'S ONE SERIES THAT HAS JUST ONE MEETING YES
whisper_transcript: <|0.00|> Yeah, I mean, I just let it run over all the files.<|3.92|><|4.28|> Yes.<|4.52|><|5.40|> I just ordered.<|6.08|><|6.24|> I ordered according to the starting times of the utterances.<|10.44|><|10.64|> What do you mean?<|11.30|><|11.44|> Yeah, I mean, I have one.<|13.46|><|13.90|> What I give you would be one file for each meeting.<|17.08|><|17.26|> Yeah, not for each meeting series.<|18.62|><|19.16|> I didn't do that yet.<|20.02|><|20.40|> Yeah, one group.<|21.16|><|21.44|> Yeah, I mean, there's one series that has just one meeting.<|24.00|><|26.68|> Yes.<|27.16|><|27.16|> Yes.<|27.72|>
text_norm: yeah p i mean it i just let it run over all the files so yes i just ordered i ordered according to the starting times of the utterances what do you mean by diffe yeah i mean t i i have one what i give you would be one file for each meeting yeah not for each meeting series i did not do that yet yeah one group yeah yeah i mean there is one series that has just one meeting yes
whisper_transcript_norm: yeah i mean i just let it run over all the files yes i just ordered i ordered according to the starting times of the utterances what do you mean yeah i mean i have one what i give you would be one file for each meeting yeah not for each meeting series i did not do that yet yeah one group yeah i mean there is one series that has just one meeting yes yes
wer: 10.843373
prev_text: UM YEAH SO FAR I EXTRACTED UM THE DURA DURATIONS BUT IT'S FROM THE WORDS FILE SO I COULD JUST UM CONTATENATE CONCATENATE UM THE WORDS INSTEAD OF THE DURATIONS AND IT SHOULD I MEAN SHOULD BE VERY STRAIGHT FORWARD I CAN TRY TO DO IT AND SEND IT TO YOU PE AND YOU HAVE A LOOK AT IT WILL IT MAKE SENSE FOR WHAT YOU WANT
prev_whisper_transcript: <|0.00|> Yeah, so far I extracted the durations, but it's from the words file, so I could just concatenate the words instead of the durations.<|15.84|><|17.06|> And it should be very straightforward.<|20.42|><|20.82|> I can try to do it and send it to you, and you have a look at it, how it makes sense for what you want.<|27.60|>
Row 16
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:50938552:933804
duration: 29.18
text: BUT I I MEAN AS UM THE START UH START TIMES UM START FOR EACH MEETING AT ZERO YOU COULD JUST PROBABLY JUST UM ADD THE UM FINAL SECOND TIME TO THE NEXT MEETING AND SO ON AND JUST PUT IT ALL TOGETHER BUT THEN WE WOULD HAVE TO CHANGE UM THE INFORMATION ABOUT WHO ON WHICH CHANNEL IT WAS SET UM TO BY WHICH PERSON IT WAS SET
whisper_transcript: <|0.00|> But I mean, as the start times start for each meeting at zero,<|8.58|><|8.58|> you could just probably just add the final second time<|14.98|><|14.98|> to the next meeting and so on and just put it all together.<|19.20|><|19.20|> But then we would have to change the information about who,<|23.94|><|23.94|> on which channel it was set, by which person it was set,<|29.18|>
text_norm: but i i mean as the start start times start for each meeting at 0 you could just probably just add the final 2nd time to the next meeting and so on and just put it all together but then we would have to change the information about who on which channel it was set to by which person it was set
whisper_transcript_norm: but i mean as the start times start for each meeting at 0 you could just probably just add the final 2nd time to the next meeting and so on and just put it all together but then we would have to change the information about who on which channel it was set by which person it was set
wer: 4.83871
prev_text: UM THE YOU YOU THE DATA IS OF THE FORM YOU HAVE UM THREE IDENTIFICATION LETTER SO B. E. D. OR B. B. D. OR SOMETHING AND THAT'S ALWAYS THE SAME GROUP AND THEN AFTER THAT THERE'S UM A NUMBER LIKE O. O. ONE O. O. TWO SO IT'S A YEAH BUT THAT'S THAT'S REALLY QUITE EASY TO SEE BECAUSE THEY'RE NAMED YES
prev_whisper_transcript: <|0.00|> The data is of the form you have three identification letters, so BED or BBD or something, and that's always the same group.<|10.30|><|10.30|> And then after that there's a number like 001, 002.<|15.30|><|15.30|> So that's really quite easy to see because they're named.<|22.10|>
Row 17
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:117394876:933804
duration: 29.18
text: AND THAT IS ACTUALLY STORED IN ANOTHER X. M. L. DOCUMENT YEAH I W WOULD THEN JUST NOT PRINT OUT THE UM START AND END TIMES NO IT'S FOR EVERY SINGLE WORD OR FOR EVERY SINGLE UTTERANCE YEAH THAT DEPENDS ON WHAT YOU WANT YEAH BUT I DO IT WITH PERL IT'S JUST STRING MANIPULATION SO I WOULD I MEAN I WOULD JUST SURE NO I DIDN'T DO A SEA NO AND YOU WOULD WANT THAT ALL IN ONE FILE FOR ALL THE CORPUS OR
whisper_transcript: <|0.00|> and that is actually stored in another XML document.<|3.08|><|3.08|> Yeah, I would then just not print out the start and end times.<|10.16|><|10.16|> No, it's for every single word.<|11.88|><|11.88|> Or for every single utterance, yeah.<|14.12|><|14.12|> That depends on what you want.<|15.32|><|15.32|> I do it with Perl, it's just string manipulation, so I would...<|19.52|><|19.52|> I mean, I would just...<|20.92|><|20.92|> Sure.<|21.80|><|21.80|> No, I didn't use it.<|23.68|><|23.68|> And you would want that all in one file for all the corpus?<|29.18|>
text_norm: and that is actually stored in another x m l document yeah i w would then just not print out the start and end times no it is for every single word or for every single utterance yeah that depends on what you want yeah but i do it with perl it is just string manipulation so i would i mean i would just sure no i did not do a sea no and you would want that all in one file for all the corpus or
whisper_transcript_norm: and that is actually stored in another xml document yeah i would then just not print out the start and end times no it is for every single word or for every single utterance yeah that depends on what you want i do it with perl it is just string manipulation so i would i mean i would just sure no i did not use it and you would want that all in one file for all the corpus
wer: 12.643678
prev_text: BUT I I MEAN AS UM THE START UH START TIMES UM START FOR EACH MEETING AT ZERO YOU COULD JUST PROBABLY JUST UM ADD THE UM FINAL SECOND TIME TO THE NEXT MEETING AND SO ON AND JUST PUT IT ALL TOGETHER BUT THEN WE WOULD HAVE TO CHANGE UM THE INFORMATION ABOUT WHO ON WHICH CHANNEL IT WAS SET UM TO BY WHICH PERSON IT WAS SET
prev_whisper_transcript: <|0.00|> But I mean, as the start times start for each meeting at zero,<|8.58|><|8.58|> you could just probably just add the final second time<|14.98|><|14.98|> to the next meeting and so on and just put it all together.<|19.20|><|19.20|> But then we would have to change the information about who,<|23.94|><|23.94|> on which channel it was set, by which person it was set,<|29.18|>
Row 18
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:53690266:958444
duration: 29.950001
text: FOR THE SERIES YEAH I CAN DIRECTLY PUT IT INTO UH JUST LIKE SO UH ONLY WORDS UM PER MEETING SERIES UH-HUH YES YEAH THEY WILL JUST I WILL JUST TAKE I WOULD UH TAKE OVER THE NAMES THEY HAVE ANYWAY YEAH YEAH YEAH ONE SERIES HAS THE UM SAME THREE STARTING LETTERS SO SO ONLY WORDS AND WORDS AND TIMES AND YOU YEAH YOU WANT IT ORDERED OKAY OKAY ANYBODY
whisper_transcript: <|0.00|> For the series. Yeah, I can directly put it into...<|4.00|><|4.00|> So, only words per meeting series.<|10.24|><|10.24|> Yeah, I will just take over the names they have anyway.<|16.40|><|16.40|> Yeah, one series has the same three starting letters.<|21.36|><|21.36|> So only words and words and times.<|24.36|><|24.36|> And you? Yeah, you want it ordered.<|26.24|><|26.24|> Okay. Okay, anybody?<|29.96|>
text_norm: for the series yeah i can directly put it into just like so only words per meeting series huh yes yeah they will just i will just take i would take over the names they have anyway yeah yeah yeah one series has the same 3 starting letters so so only words and words and times and you yeah you want it ordered okay okay anybody
whisper_transcript_norm: for the series yeah i can directly put it into so only words per meeting series yeah i will just take over the names they have anyway yeah one series has the same 3 starting letters so only words and words and times and you yeah you want it ordered okay okay anybody
wer: 19.69697
prev_text: AND THAT IS ACTUALLY STORED IN ANOTHER X. M. L. DOCUMENT YEAH I W WOULD THEN JUST NOT PRINT OUT THE UM START AND END TIMES NO IT'S FOR EVERY SINGLE WORD OR FOR EVERY SINGLE UTTERANCE YEAH THAT DEPENDS ON WHAT YOU WANT YEAH BUT I DO IT WITH PERL IT'S JUST STRING MANIPULATION SO I WOULD I MEAN I WOULD JUST SURE NO I DIDN'T DO A SEA NO AND YOU WOULD WANT THAT ALL IN ONE FILE FOR ALL THE CORPUS OR
prev_whisper_transcript: <|0.00|> and that is actually stored in another XML document.<|3.08|><|3.08|> Yeah, I would then just not print out the start and end times.<|10.16|><|10.16|> No, it's for every single word.<|11.88|><|11.88|> Or for every single utterance, yeah.<|14.12|><|14.12|> That depends on what you want.<|15.32|><|15.32|> I do it with Perl, it's just string manipulation, so I would...<|19.52|><|19.52|> I mean, I would just...<|20.92|><|20.92|> Sure.<|21.80|><|21.80|> No, I didn't use it.<|23.68|><|23.68|> And you would want that all in one file for all the corpus?<|29.18|>
Row 19
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:27217610:835884
duration: 26.120001
text: THAT'S WHAT I'M GUESSING THAT'S YOU KNOW UM WHAT I BECAUSE NINE MEGA BYTE IS WHAT I GOT FOR WHEN I SAID FOR EVERY UM UTTERANCE THIS IS GOES FROM THERE TO THERE AND TAKES TAKES SECONDS OH YEAH I MEAN I'M IT DOING IT FOR ALL OF IT DOESN'T MATTER YEAH I MEAN I HOPE IT WILL BE THE SAME FOR THE WORDS IT'S JUST WHAT I I MM-HMM MM
whisper_transcript: <|0.00|> That's what I'm guessing, that's, you know, what I, because 9 megabytes is what I got for<|6.60|><|6.60|> when I said for every utterance this goes from there to there and takes seconds.<|13.30|><|13.30|> Oh, yeah, I mean, I'm doing it for all of it.<|18.30|><|18.30|> Doesn't matter.<|19.20|><|19.20|> Yeah, I mean, I hope it will be the same for the words.<|23.10|><|23.10|> It's just what I...<|26.12|>
text_norm: that is what i am guessing that is you know what i because 9 mega byte is what i got for when i said for every utterance this is goes from there to there and takes takes seconds 0 yeah i mean i am it doing it for all of it does not matter yeah i mean i hope it will be the same for the words it is just what i i
whisper_transcript_norm: that is what i am guessing that is you know what i because 9 megabytes is what i got for when i said for every utterance this goes from there to there and takes seconds 0 yeah i mean i am doing it for all of it does not matter yeah i mean i hope it will be the same for the words it is just what i .
wer: 8.219178
prev_text: UM ORD BASE DOT TIMES YEAH AND DO YOU WANT YEAH SOMETIMES THEY'RE CONTAINED IN ONE ANOTHER SO JUST AFTER TH MM-HMM 'KAY ORDERED ONLY WORDS UM AND I THINK UM FOR ALL THE CORPUS IT'S JUST FROM I KNOW FROM OTHER TIMES IT'S UM NINE MEGAMI BYTE TO HAVE I MEAN SHOULD BE SHOULD BE SIMILAR TO HAVE THE WORDS SHOULD BE NA UM ALL THE WORDS TOGETHER UM FOR ALL THE MEETINGS
prev_whisper_transcript: <|0.00|> Or at best at times.<|1.48|><|1.48|> Yeah, and do you want...<|2.64|><|2.64|> Yeah, sometimes they're contained in one another.<|5.84|><|5.84|> So just after...<|7.16|><|8.64|> Okay, ordered, only words.<|11.16|><|11.16|> And I think for all the corpus,<|13.80|><|13.80|> it's just from what I know from my times,<|15.96|><|15.96|> it's nine megabyte to have,<|19.64|><|19.64|> I mean, should be similar to have the words,<|22.76|><|24.76|> all the words together for all the meetings.<|27.48|>
Row 20
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:3517346:841644
duration: 26.299999
text: SO SO UM I WILL PROBABLY SEND UM JUST ONE FILE OF THE FIRST MEETING UM TO ALL THOSE WHO NEED IT SO THAT YOU CAN HAVE A LOOK WHETHER THAT'S WHAT YOU WANT YEAH I MEAN IF IT'S JUST FOR ONE MEETING IT'S REALLY NOT TOO BIG YEAH WHAT DO WE HAVE TO DEMONSTRATE THE BASIC WORD IMPORTANCE IS OFF LINE AS WELL THE COMBINED MEASURE MIGHT NOT BE IF WE WANT TO WAIT WHAT THE USER HAS TYPED IN INTO THE SEARCH YEAH
whisper_transcript: <|0.00|> So I probably send just one file of the first meeting to all those who need it, so that<|10.24|><|10.24|> you can have a look whether that's what you want.<|12.32|><|12.32|> Yeah, I mean if it's just for one meeting it's really not too big.<|15.62|><|15.62|> What do we have to demonstrate?<|16.78|><|16.78|> The basic word importance is offline as well.<|19.92|><|19.92|> The combined measure might not be if we want to wait what a user has typed into the search.<|26.30|>
text_norm: so so i will probably send just one file of the 1st meeting to all those who need it so that you can have a look whether that is what you want yeah i mean if it is just for one meeting it is really not too big yeah what do we have to demonstrate the basic word importance is off line as well the combined measure might not be if we want to wait what the user has typed in into the search yeah
whisper_transcript_norm: so i probably send just one file of the 1st meeting to all those who need it so that you can have a look whether that is what you want yeah i mean if it is just for one meeting it is really not too big what do we have to demonstrate the basic word importance is offline as well the combined measure might not be if we want to wait what a user has typed into the search
wer: 9.411765
prev_text: THAT'S WHAT I'M GUESSING THAT'S YOU KNOW UM WHAT I BECAUSE NINE MEGA BYTE IS WHAT I GOT FOR WHEN I SAID FOR EVERY UM UTTERANCE THIS IS GOES FROM THERE TO THERE AND TAKES TAKES SECONDS OH YEAH I MEAN I'M IT DOING IT FOR ALL OF IT DOESN'T MATTER YEAH I MEAN I HOPE IT WILL BE THE SAME FOR THE WORDS IT'S JUST WHAT I I MM-HMM MM
prev_whisper_transcript: <|0.00|> That's what I'm guessing, that's, you know, what I, because 9 megabytes is what I got for<|6.60|><|6.60|> when I said for every utterance this goes from there to there and takes seconds.<|13.30|><|13.30|> Oh, yeah, I mean, I'm doing it for all of it.<|18.30|><|18.30|> Doesn't matter.<|19.20|><|19.20|> Yeah, I mean, I hope it will be the same for the words.<|23.10|><|23.10|> It's just what I...<|26.12|>
Row 21
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:99909956:823084
duration: 25.719999
text: AND THERE ARE QUITE UNIMPORTANT WORDS IN THERE BUT QUITE IMPORTANT WORDS AS WELL I THINK WE SHOULD JUST DISREGARD THE THE OKAY ALRIGHT YEAH BUT THERE IS NO I. D. FOR AN UTTERANCE I THINK IT'S JUST FOR INDIVIDUAL WORDS SO HOW DO WE DO THAT THEN WE FOR UTTERANCES AS WELL I THINK IT'S JUST FOR ONE WORD SO WE HAVE TO YEAH
whisper_transcript: <|0.00|> and there are quite unimportant words in there, but quite important words as well.<|5.24|><|5.24|> I think we should just disregard that.<|9.32|><|9.32|> Okay.<|11.28|><|11.28|> But there is no ID for an utterance, I think it's just for individual words.<|16.00|><|16.00|> So how do we do that then?<|18.60|><|18.60|> For utterances as well.<|21.48|><|21.48|> I think it's just for one word, so we have to...<|25.72|>
text_norm: and there are quite unimportant words in there but quite important words as well i think we should just disregard the the okay alright yeah but there is no i d for an utterance i think it is just for individual words so how do we do that then we for utterances as well i think it is just for one word so we have to yeah
whisper_transcript_norm: and there are quite unimportant words in there but quite important words as well i think we should just disregard that okay but there is no id for an utterance i think it is just for individual words so how do we do that then for utterances as well i think it is just for one word so we have to .
wer: 11.940298
prev_text: I'M NOT QUITE SO WHAT IT DID YOU WANT TO DO IT I YOU JUST WANTED TO ASSIGN UH I THOUGHT ABOUT WORDS MM MM OKAY YEAH BUT HOW ABOUT THOSE WORDS WHICH DON'T CARRY ANY MEANING AT ALL THE UM AND UHS AND SOMETHING LIKE THAT BECAUSE IF WE IF WE AVERAGE AVERAGE OVER OVER A WHOLE UTTERANCE ALL THE WORDS
prev_whisper_transcript: <|0.00|> Not quite. So what did you want to do? You just wanted to assign...<|5.12|><|5.12|> I thought about words.<|7.96|><|7.96|> But how about those words which don't carry any meaning at all?<|12.48|><|12.48|> The um and er and something like that.<|14.90|><|14.90|> Because if we average over whole utterance all the words,<|21.34|>
Row 22
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:100733114:955244
duration: 29.85
text: UH I'M NOT QUITE SURE I HAVE ONLY SEEN THAT THE UH THE INDIVIDUAL WORDS HAVE GOT AN I. D. YEAH YOU ALWAYS COULD HAVE A LOOK AT THE TIME STAMPS AND THEN TAKE THE ONES THAT UH BELONG TOGETHER TO FORM AN UTTERANCE YEAH IF THEY ARE ALREADY THERE'S IT'S EASY BUT IT WOULD BE POSSIBLE UH YEAH OKAY YOU S UH YOU SAID YOU ARE CURRENTLY IN UH IMPLEMENTING THE IDEA WHAT EXACTLY ARE YOU COMPUTING OKAY OKAY MM-HMM MM-HMM
whisper_transcript: <|0.00|> I'm not quite sure. I have only seen that the individual words have got an ID.<|5.84|><|5.84|> You always could have a look at the timestamps and then take the ones that belong together to form an utterance.<|14.34|><|14.34|> Yeah, if they are already there, it's easy, but it would be possible.<|18.50|><|18.50|> Yeah. Okay.<|21.34|><|21.34|> You said you're currently implementing the idea. What exactly are you computing?<|29.84|>
text_norm: i am not quite sure i have only seen that the the individual words have got an i d yeah you always could have a look at the time stamps and then take the ones that belong together to form an utterance yeah if they are already there is it is easy but it would be possible yeah okay you s you said you are currently in implementing the idea what exactly are you computing okay okay
whisper_transcript_norm: i am not quite sure i have only seen that the individual words have got an id you always could have a look at the timestamps and then take the ones that belong together to form an utterance yeah if they are already there it is easy but it would be possible yeah okay you said you are currently implementing the idea what exactly are you computing
wer: 15.584415
prev_text: AND THERE ARE QUITE UNIMPORTANT WORDS IN THERE BUT QUITE IMPORTANT WORDS AS WELL I THINK WE SHOULD JUST DISREGARD THE THE OKAY ALRIGHT YEAH BUT THERE IS NO I. D. FOR AN UTTERANCE I THINK IT'S JUST FOR INDIVIDUAL WORDS SO HOW DO WE DO THAT THEN WE FOR UTTERANCES AS WELL I THINK IT'S JUST FOR ONE WORD SO WE HAVE TO YEAH
prev_whisper_transcript: <|0.00|> and there are quite unimportant words in there, but quite important words as well.<|5.24|><|5.24|> I think we should just disregard that.<|9.32|><|9.32|> Okay.<|11.28|><|11.28|> But there is no ID for an utterance, I think it's just for individual words.<|16.00|><|16.00|> So how do we do that then?<|18.60|><|18.60|> For utterances as well.<|21.48|><|21.48|> I think it's just for one word, so we have to...<|25.72|>
Row 23
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:78871926:956844
duration: 29.9
text: YEAH I W I W I WOULD NEED THE RAW TEXT PRETTY SOON BECAUSE I HAVE TO FIND OUT UM HOW I HAVE TO PUT THE SEGMENTS INTO BINS AND THEN YEAH NO THAT'S NOT NECESSARY YES I DID BUT UM I'VE ONLY JUST GOT THE NOTES I HAVE TO STILL HAVE UH TO ORDER EVERYTHING BY THE TIME AND YEAH I THINK IT'S QUITE EASY AFTER THE YEAH YEAH SO UH MM-HMM
whisper_transcript: <|0.00|> I would need the raw text pretty soon because I have to find out how I have to put the segments into bins.<|11.64|><|14.04|> That's not necessary.<|15.38|><|15.56|> Yes, I did.<|16.26|><|16.86|> But I've only just got the notes.<|19.58|><|19.70|> I have to still order everything by the time.<|23.76|><|25.06|> And yeah, I think it's quite easy.<|26.98|><|26.98|> Yeah, yeah, yeah.<|29.90|>
text_norm: yeah i w i w i would need the raw text pretty soon because i have to find out how i have to put the segments into bins and then yeah no that is not necessary yes i did but i have only just got the notes i have to still have to order everything by the time and yeah i think it is quite easy after the yeah yeah so
whisper_transcript_norm: i would need the raw text pretty soon because i have to find out how i have to put the segments into bins that is not necessary yes i did but i have only just got the notes i have to still order everything by the time and yeah i think it is quite easy yeah yeah yeah
wer: 19.718309
prev_text: UH I'M NOT QUITE SURE I HAVE ONLY SEEN THAT THE UH THE INDIVIDUAL WORDS HAVE GOT AN I. D. YEAH YOU ALWAYS COULD HAVE A LOOK AT THE TIME STAMPS AND THEN TAKE THE ONES THAT UH BELONG TOGETHER TO FORM AN UTTERANCE YEAH IF THEY ARE ALREADY THERE'S IT'S EASY BUT IT WOULD BE POSSIBLE UH YEAH OKAY YOU S UH YOU SAID YOU ARE CURRENTLY IN UH IMPLEMENTING THE IDEA WHAT EXACTLY ARE YOU COMPUTING OKAY OKAY MM-HMM MM-HMM
prev_whisper_transcript: <|0.00|> I'm not quite sure. I have only seen that the individual words have got an ID.<|5.84|><|5.84|> You always could have a look at the timestamps and then take the ones that belong together to form an utterance.<|14.34|><|14.34|> Yeah, if they are already there, it's easy, but it would be possible.<|18.50|><|18.50|> Yeah. Okay.<|21.34|><|21.34|> You said you're currently implementing the idea. What exactly are you computing?<|29.84|>
Row 24
audio_filepath: /root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:60737288:954924
duration: 29.84
text: YEAH B I UH W THAT'S WHAT I WAS UH THOUGHT THAT YOU JUST COMBINE THEM AND THEN ORDER THE TIME STAMPS ACCORDINGLY OKAY UM WHAT I FOUND OUT WAS THAT THERE ARE QUITE A LOT OF THINGS WITHOUT WITHOUT S TIME STAMPS IN THE BEGINNING YEAH AND UH X. M. L. FILES YEAH THAT'S JUST AN I. D. OR SOMETHING I DON'T KNOW JUST NUMBERS YES BUT WHAT ARE THE OTHER THINGS THAT'S UH SOME KIND OF NUMBER F MAYBE THE FILE NUMBER OR SOMETHING THAT IS IN THE BEGINNING WHAT IS THAT
whisper_transcript: <|0.00|> Yeah, that's what I thought.<|2.42|><|2.90|> That you just combine them and then order the timestamps accordingly.<|7.00|><|7.22|> Okay.<|7.50|><|8.48|> What I found out was that there are quite a lot of things without timestamps in the beginning.<|15.50|><|15.82|> Yeah, XML files.<|17.80|><|17.96|> Yeah, that's just an ID or something.<|19.86|><|20.00|> I don't know, just numbers.<|21.08|><|21.32|> Yes, but what are the other things that's some kind of number?<|25.16|><|25.82|> Maybe the file number or something that is in the beginning.<|29.04|><|29.34|> What is that?<|29.84|>
text_norm: yeah b i w that is what i was thought that you just combine them and then order the time stamps accordingly okay what i found out was that there are quite a lot of things without without s time stamps in the beginning yeah and x m l files yeah that is just an i d or something i do not know just numbers yes but what are the other things that is some kind of number f maybe the file number or something that is in the beginning what is that
whisper_transcript_norm: yeah that is what i thought that you just combine them and then order the timestamps accordingly okay what i found out was that there are quite a lot of things without timestamps in the beginning yeah xml files yeah that is just an id or something i do not know just numbers yes but what are the other things that is some kind of number maybe the file number or something that is in the beginning what is that
wer: 18.27957
prev_text: YEAH I W I W I WOULD NEED THE RAW TEXT PRETTY SOON BECAUSE I HAVE TO FIND OUT UM HOW I HAVE TO PUT THE SEGMENTS INTO BINS AND THEN YEAH NO THAT'S NOT NECESSARY YES I DID BUT UM I'VE ONLY JUST GOT THE NOTES I HAVE TO STILL HAVE UH TO ORDER EVERYTHING BY THE TIME AND YEAH I THINK IT'S QUITE EASY AFTER THE YEAH YEAH SO UH MM-HMM
prev_whisper_transcript: <|0.00|> I would need the raw text pretty soon because I have to find out how I have to put the segments into bins.<|11.64|><|14.04|> That's not necessary.<|15.38|><|15.56|> Yes, I did.<|16.26|><|16.86|> But I've only just got the notes.<|19.58|><|19.70|> I have to still order everything by the time.<|23.76|><|25.06|> And yeah, I think it's quite easy.<|26.98|><|26.98|> Yeah, yeah, yeah.<|29.90|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:43817608:935404
29.23
IT'S IT'S QUITE STRANGE AND ALSO UM THERE ARE DIFFERENT UM COMBINATIONS OF LETTERS B. R. E. AND SOMETHING LIKE THAT IS IT EVERYTHING ORDERED ARE THE TIME STAMPS GLOBAL OR UH ARE THEY LOCAL AT ANY POINT OKAY YEAH IT'S RAINBOW IT'S UM I THINK IT'S JUST THE DICTIONARY IN THE FIRST PLACE BUT UM NO I HAVE TO BIN IT UP AND SO I WILL ONLY HAVE COUNTS FOR EACH EACH BIN OR SOMETHING
<|0.00|> It's quite strange.<|1.24|><|1.24|> And also there are different combinations of letters,<|7.24|><|7.24|> BRE and something like that.<|9.16|><|9.16|> Is everything ordered, are the timestamps global,<|12.86|><|12.86|> or are they local at any point?<|15.78|><|15.78|> Yeah, it's rainbow.<|18.34|><|18.34|> I think it's just a dictionary in the first place.<|22.24|><|22.24|> No, I have to bin it up.<|24.56|><|24.56|> And so I will only have counts for each bin or something.<|29.24|>
it is it is quite strange and also there are different combinations of letters b r e and something like that is it everything ordered are the time stamps global or are they local at any point okay yeah it is rainbow it is i think it is just the dictionary in the 1st place but no i have to bin it up and so i will only have counts for each each bin or something
it is quite strange and also there are different combinations of letters bre and something like that is everything ordered are the timestamps global or are they local at any point yeah it is rainbow i think it is just a dictionary in the 1st place no i have to bin it up and so i will only have counts for each bin or something
18.421053
DO YOU KNOW UM I THINK THERE ARE QUITE A LOT OF NUMBERS IN THE BEGINNING WHERE N THERE IS NO TIME STAMP FOR THE NUMBERS IT'S THINK THEY SAY UM QUITE A LOT OF NUMBERS AND BEFORE THAT UH UM THERE'S THIS NUMBER WAS IT YEAH THERE I ARE NUMBERS IN THE UM THE W. TAG BUT THERE ARE NO TIME STAMPS YEAH YEAH IN THE BEGINNING AS WELL SOMETIMES I THINK AT LEAST I SAW SOME YEAH YEAH BUT WHAT IT IS IT ACTUALLY THAT NUMBERS OKAY SO BUT THERE ARE NO TIME STAMPS ANNOTATED TO THAT
<|0.00|> I think there are quite a lot of numbers in the beginning when there's no timestamp for the numbers.<|6.16|><|6.16|> I think they say quite a lot of numbers and before that there's just number.<|11.66|><|11.66|> Yeah, there are numbers in the W tag, but there are no timestamps.<|17.54|><|17.54|> Yeah, in the beginning as well sometimes I think.<|21.12|><|21.12|> At least I saw some.<|23.20|><|23.20|> But what is it actually that numbers...<|25.66|><|25.66|> Okay, so there are no timestamps annotated to that.<|29.84|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:75436894:925484
28.92
IT'S BECAUSE UM RAINBOW IS A TEXT CLASSIFICATION SYSTEM AND I THINK IT'S NOT POSSIBLE TO HAVE JUST ONE CLASS THAT'S THE PROBLEM MAYBE WE COULD YEAH SURE YOU SURE WE COULD DO THAT BUT I DON'T THAT MAKES SENSE IF WE NEED JUST FREQUENCIES MAYBE WE SHOULD JUST CALCULATE THEM BY USING PERL OR SOMETHING I DON'T KNOW YEAH IT'S QUITE EASY TO JUST COUNT AND S OR SORT THEM BY UM FREQUENCY
<|0.00|> It's because our rainbow is a tax classification system.<|3.28|><|3.28|> And I think it's not possible to have just one class.<|8.84|><|8.84|> That's the problem.<|9.84|><|9.84|> Maybe we could...<|11.08|><|11.08|> Yeah, sure, you're sure we could do that, but I don't know if that makes sense.<|14.64|><|14.64|> If we need just frequencies, maybe we should just calculate them by using Perl or something.<|23.40|><|23.40|> Yeah, it's quite easy to just count and sort them by our frequency.<|28.92|>
it is because rainbow is a text classification system and i think it is not possible to have just one class that is the problem maybe we could yeah sure you sure we could do that but i do not that makes sense if we need just frequencies maybe we should just calculate them by using perl or something i do not know yeah it is quite easy to just count and s or sort them by frequency
it is because our rainbow is a tax classification system and i think it is not possible to have just one class that is the problem maybe we could yeah sure you are sure we could do that but i do not know if that makes sense if we need just frequencies maybe we should just calculate them by using perl or something yeah it is quite easy to just count and sort them by our frequency
15.384615
IT'S IT'S QUITE STRANGE AND ALSO UM THERE ARE DIFFERENT UM COMBINATIONS OF LETTERS B. R. E. AND SOMETHING LIKE THAT IS IT EVERYTHING ORDERED ARE THE TIME STAMPS GLOBAL OR UH ARE THEY LOCAL AT ANY POINT OKAY YEAH IT'S RAINBOW IT'S UM I THINK IT'S JUST THE DICTIONARY IN THE FIRST PLACE BUT UM NO I HAVE TO BIN IT UP AND SO I WILL ONLY HAVE COUNTS FOR EACH EACH BIN OR SOMETHING
<|0.00|> It's quite strange.<|1.24|><|1.24|> And also there are different combinations of letters,<|7.24|><|7.24|> BRE and something like that.<|9.16|><|9.16|> Is everything ordered, are the timestamps global,<|12.86|><|12.86|> or are they local at any point?<|15.78|><|15.78|> Yeah, it's rainbow.<|18.34|><|18.34|> I think it's just a dictionary in the first place.<|22.24|><|22.24|> No, I have to bin it up.<|24.56|><|24.56|> And so I will only have counts for each bin or something.<|29.24|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:33196034:923244
28.85
JUST USING A PERL SCRIPT IS IT TOO BIG YEAH HMM I DON'T KNOW HOW YOU HOW MANY TERMS YOU CAN HANDLE IN PERL MM YEAH UH I CAN GET ALL THE RAW TEXT BUT IT HAS TO BE ORDERED STILL SO NO IT ISN'T UM IT'S IN WHAT IS IMPLEMENTED IN RAINBOW IS INFORMATION GAIN AND I'M NOT QUITE SURE HOW THEY CALCULATE THAT YEAH UH THAT'S WHAT RAINBOW DOES I THINK YOU J CAN JUST GET PROBABILITIES FOR A CERTAIN WORDS FOR EACH DOCUMENT CERTAIN
<|0.00|> Just using a Perl script.<|2.00|><|2.00|> Is it too big?<|3.00|><|3.00|> Yeah.<|4.00|><|4.00|> I don't know how many terms you can handle in Perl.<|9.20|><|9.20|> I can get all the raw text but it has to be ordered still.<|12.00|><|12.00|> No it doesn't.<|14.00|><|14.00|> It's what is implemented in Rainbird is information gain and I'm not quite sure how they calculate<|20.00|><|20.00|> that.<|21.00|><|21.00|> That's what Rainbird does I think.<|23.00|><|23.00|> You can just get probabilities for certain words for each document.<|28.00|>
just using a perl script is it too big yeah i do not know how you how many terms you can handle in perl yeah i can get all the raw text but it has to be ordered still so no it is not it is in what is implemented in rainbow is information gain and i am not quite sure how they calculate that yeah that is what rainbow does i think you j can just get probabilities for a certain words for each document certain
just using a perl script is it too big yeah i do not know how many terms you can handle in perl i can get all the raw text but it has to be ordered still no it does not it is what is implemented in rainbird is information gain and i am not quite sure how they calculate that that is what rainbird does i think you can just get probabilities for certain words for each document
13.793103
IT'S BECAUSE UM RAINBOW IS A TEXT CLASSIFICATION SYSTEM AND I THINK IT'S NOT POSSIBLE TO HAVE JUST ONE CLASS THAT'S THE PROBLEM MAYBE WE COULD YEAH SURE YOU SURE WE COULD DO THAT BUT I DON'T THAT MAKES SENSE IF WE NEED JUST FREQUENCIES MAYBE WE SHOULD JUST CALCULATE THEM BY USING PERL OR SOMETHING I DON'T KNOW YEAH IT'S QUITE EASY TO JUST COUNT AND S OR SORT THEM BY UM FREQUENCY
<|0.00|> It's because our rainbow is a tax classification system.<|3.28|><|3.28|> And I think it's not possible to have just one class.<|8.84|><|8.84|> That's the problem.<|9.84|><|9.84|> Maybe we could...<|11.08|><|11.08|> Yeah, sure, you're sure we could do that, but I don't know if that makes sense.<|14.64|><|14.64|> If we need just frequencies, maybe we should just calculate them by using Perl or something.<|23.40|><|23.40|> Yeah, it's quite easy to just count and sort them by our frequency.<|28.92|>
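For reference, the "information gain" the speakers attribute to Rainbow is usually defined as the reduction in class entropy from observing a term. The formula below is the standard textbook definition only, not a claim about Rainbow's exact implementation, which the speakers themselves say they are unsure of:

```latex
% Standard information gain of a term w over a class set C
% (the usual definition; Rainbow's actual computation may differ)
IG(w) = H(C) - H(C \mid w)
      = -\sum_{c \in C} P(c)\log P(c)
        + \sum_{v \in \{w,\,\bar w\}} P(v) \sum_{c \in C} P(c \mid v)\log P(c \mid v)
```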
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:119899230:876844
27.4
UM WE WOULD HAVE TO LOOK AT THAT MM-HMM OH YEAH THAT'S WHAT I THOUGHT AS WELL THAT YOU THAT PROBABLY THE THE TOPIC SEGMENT LEVEL IS THE MOST UM INFORMATIVE FOR THE WORDS YEAH THAT'S THE PROBLEM I DON'T KNOW MM-HMM SO SHALL WE SIT TOGETHER TOMORROW THEN AS WELL UH OKAY UM YEAH W WOULD IT BE BEST AT THE MOMENT IT'S IT'S JUST LINES OF MM-HMM UM OKAY
<|0.00|> We would have to look at that.<|4.00|><|4.00|> That's what I thought as well, that probably the topic segment level is the most informative<|13.04|><|13.04|> for the words.<|14.04|><|14.04|> Yeah, that's the problem.<|15.04|><|15.04|> I don't know.<|16.04|><|16.04|> So shall we sit together tomorrow then as well?<|20.04|><|20.04|> Okay.<|21.04|><|21.04|> What would be best?<|22.04|><|22.04|> At the moment it's just lines of...<|27.40|>
we would have to look at that 0 yeah that is what i thought as well that you that probably the the topic segment level is the most informative for the words yeah that is the problem i do not know so shall we sit together tomorrow then as well okay yeah w would it be best at the moment it is it is just lines of okay
we would have to look at that that is what i thought as well that probably the topic segment level is the most informative for the words yeah that is the problem i do not know so shall we sit together tomorrow then as well okay what would be best at the moment it is just lines of .
16.17647
JUST USING A PERL SCRIPT IS IT TOO BIG YEAH HMM I DON'T KNOW HOW YOU HOW MANY TERMS YOU CAN HANDLE IN PERL MM YEAH UH I CAN GET ALL THE RAW TEXT BUT IT HAS TO BE ORDERED STILL SO NO IT ISN'T UM IT'S IN WHAT IS IMPLEMENTED IN RAINBOW IS INFORMATION GAIN AND I'M NOT QUITE SURE HOW THEY CALCULATE THAT YEAH UH THAT'S WHAT RAINBOW DOES I THINK YOU J CAN JUST GET PROBABILITIES FOR A CERTAIN WORDS FOR EACH DOCUMENT CERTAIN
<|0.00|> Just using a Perl script.<|2.00|><|2.00|> Is it too big?<|3.00|><|3.00|> Yeah.<|4.00|><|4.00|> I don't know how many terms you can handle in Perl.<|9.20|><|9.20|> I can get all the raw text but it has to be ordered still.<|12.00|><|12.00|> No it doesn't.<|14.00|><|14.00|> It's what is implemented in Rainbird is information gain and I'm not quite sure how they calculate<|20.00|><|20.00|> that.<|21.00|><|21.00|> That's what Rainbird does I think.<|23.00|><|23.00|> You can just get probabilities for certain words for each document.<|28.00|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:46624042:793004
24.780001
SO UM YOU'D DO YOU EXTRACT THE WORDS THE RAW TEXT AS WELL UH OKAY MM-HMM PRINT OUT OKAY OKAY THAT OKAY SO HAVE WE ALREADY EXTRACTED FROM ALL THE FILES YEAH DID YOU ALSO ORDER MM-HMM HMM HMM OKAY UH I DON'T NEED THE TIMES I JUST NEED THE WORDS BUT UM YEAH IN THE RIGHT ORDER YES YEAH THAT DOESN'T MATTER TOO MUCH I THINK HMM MM-HMM
<|0.00|> So do you extract the words, the raw text as well?<|4.60|><|4.60|> Ok.<|5.20|><|5.20|> Print out?<|5.80|><|5.80|> Ok.<|6.30|><|6.30|> Ok.<|6.80|><|6.80|> Ok.<|7.30|><|7.30|> So have you already extracted from all the files?<|9.80|><|9.80|> Yeah.<|10.30|><|10.30|> Did you also order?<|11.60|><|11.60|> Mhm.<|12.10|><|12.10|> Mhm.<|12.60|><|12.60|> Ok.<|13.10|><|13.10|> I don't need the time, so just need the words, but yeah, in the right order, yes.<|19.40|><|21.90|> Yeah, it doesn't matter too much, I think.<|24.20|><|24.20|> Mhm.<|24.70|>
so you would do you extract the words the raw text as well okay print out okay okay that okay so have we already extracted from all the files yeah did you also order okay i do not need the times i just need the words but yeah in the right order yes yeah that does not matter too much i think
so do you extract the words the raw text as well ok print out ok ok ok so have you already extracted from all the files yeah did you also order ok i do not need the time so just need the words but yeah in the right order yes yeah it does not matter too much i think
19.354839
UM WE WOULD HAVE TO LOOK AT THAT MM-HMM OH YEAH THAT'S WHAT I THOUGHT AS WELL THAT YOU THAT PROBABLY THE THE TOPIC SEGMENT LEVEL IS THE MOST UM INFORMATIVE FOR THE WORDS YEAH THAT'S THE PROBLEM I DON'T KNOW MM-HMM SO SHALL WE SIT TOGETHER TOMORROW THEN AS WELL UH OKAY UM YEAH W WOULD IT BE BEST AT THE MOMENT IT'S IT'S JUST LINES OF MM-HMM UM OKAY
<|0.00|> We would have to look at that.<|4.00|><|4.00|> That's what I thought as well, that probably the topic segment level is the most informative<|13.04|><|13.04|> for the words.<|14.04|><|14.04|> Yeah, that's the problem.<|15.04|><|15.04|> I don't know.<|16.04|><|16.04|> So shall we sit together tomorrow then as well?<|20.04|><|20.04|> Okay.<|21.04|><|21.04|> What would be best?<|22.04|><|22.04|> At the moment it's just lines of...<|27.40|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:41138214:804524
25.139999
BUT IT WON'T BE VERY UM PROCESSOR INTENSIVE OR MEMORY INTENSIVE I DON'T THINK DON'T THINK SO YEAH ARE WE STILL GONNA GO FOR DUMPING IT INTO A DATABASE ARE WE STILL GONNA DUMP IT INTO A DATABASE 'CAUSE IF WE ARE I RECKON WE SHOULD ALL READ OUR CLASSES OUT OF THE DATABASE IT'LL BE SO MUCH EASIER WELL IF WE'RE GONNA DUMP THE PART OF IT INTO A DATABASE ANYWAY WE MIGHT AS WELL DUMP ALL THE FIELDS WE WANT INTO THE DATABASE CALCULATE EVERYTHING FROM THERE
<|0.00|> It won't be very processor intensive or memory intensive I think.<|5.00|><|5.00|> I don't think so.<|6.00|><|6.00|> Are we still going to go for dumping it into a database?<|8.00|><|8.00|> Are we still going to dump it into a database?<|10.68|><|10.68|> Because if we are I reckon we should all read our classes out of the database.<|14.12|><|14.12|> It'll be so much easier.<|15.40|><|15.40|> Or if we're going to dump the part of it into a database anyway, we might as well dump all<|19.94|><|19.94|> the fields we want into the database, calculate everything from there.<|25.14|>
but it will not be very processor intensive or memory intensive i do not think do not think so yeah are we still going to go for dumping it into a database are we still going to dump it into a database cause if we are i reckon we should all read our classes out of the database it will be so much easier well if we are going to dump the part of it into a database anyway we might as well dump all the fields we want into the database calculate everything from there
it will not be very processor intensive or memory intensive i think i do not think so are we still going to go for dumping it into a database are we still going to dump it into a database because if we are i reckon we should all read our classes out of the database it will be so much easier or if we are going to dump the part of it into a database anyway we might as well dump all the fields we want into the database calculate everything from there
7.291667
HOW LONG WOULD IT TAKE TO MAKE THE FREQUENCY COUNTS WITH A JAVA HASH TABLE YEAH NO HOW LONG YOU WOULD HAVE TO PROGRAM SOMETHING OKAY MM BECAUSE IT'S QUITE EASY IN PERL AS WELL IT'S JUST A LINE OF CODE FOR COUNTING ALL THE WORDS AND YEAH IT'S IT'S BY HASHES YEAH YEAH 'KAY I I DRY READ IT THE LAST TIME NEXT WEEK YEAH YEAH NO UH MINE'S GONNA BE MOSTLY USING THE OFF LINE BUT THE ACTUAL STUFF IT'S DOING WILL BE ON LINE
<|0.00|> How long would it take to make the frequency counts with the Java hash table?<|6.50|><|6.50|> Yeah.<|7.00|><|7.00|> Know how long you would have to program something like...<|10.40|><|10.40|> Because it's quite easy in Perl as well.<|12.50|><|12.50|> It's just a line of code for counting all the words and...<|16.50|><|16.50|> Yeah, it's by hashes.<|19.50|><|19.50|> I drive Reddit last time, so I'm fine.<|22.50|><|22.50|> Next week.<|23.50|><|23.50|> Yeah.<|24.50|><|24.50|> Mine's gonna be mostly using the offline, but the actual stuff it's doing will be online.<|29.78|>
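The frequency count discussed here, whether done with a Java hash table or "a line of code" in Perl, amounts to a single pass over the text with a map from word to count. A minimal Java sketch under that assumption; the input file `rawtext.txt` is a hypothetical placeholder for the extracted raw text, not a file from the project:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the hash-table word count discussed above.
public class WordFreq {
    public static void main(String[] args) throws IOException {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : Files.readAllLines(Paths.get("rawtext.txt"))) {
            for (String word : line.toLowerCase().split("\\s+")) {
                if (!word.isEmpty()) {
                    counts.merge(word, 1, Integer::sum);  // count each token
                }
            }
        }
        // Sort by descending frequency, as suggested in the discussion.
        counts.entrySet().stream()
              .sorted(Map.Entry.<String, Integer>comparingByValue(Comparator.reverseOrder()))
              .forEach(e -> System.out.println(e.getKey() + "\t" + e.getValue()));
    }
}
```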
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:66972674:938924
29.34
THEN WE DON'T EVEN HAVE TO WORRY THAT MUCH ABOUT THE UNDERLYING X. M. L. REPRESENTATION WE CAN JUST QUERY IT WELL IF WE'RE GONNA DO THAT WE SHOULD TRY AND STORE EVERYTHING IN IN AN X. M. L. FORMAT AS WELL YEAH YEAH WELL WE DON'T EVEN NEED TO DO THAT 'CAUSE IF WE GOT OUR INFORMATION DENSITY CALCULATED OFF LINE SO ALL WE DO IS TREAT THE WHOLE LOT AS ONE MASSIVE DOCUMENT I MEAN THEY'LL IT'S NOT GONNA BE SO BIG THAT WE CAN'T LOAD IN A INFORMATION DENSITY FOR EVERY UTTERANCE AND WE CAN JUST SUMMARISE BASED ON THAT I THINK YOU CAN DO IT ON LINE
<|0.00|> then we don't even have to worry that much<|2.00|><|2.00|> about the underlying XML representation.<|4.72|><|4.98|> We can just query it.<|5.82|><|5.90|> If we're going to do that,<|6.74|><|6.80|> we should try and store everything in an XML format as well.<|11.00|><|11.16|> Yeah.<|11.26|><|11.46|> Well, we don't even need to do that<|13.24|><|13.24|> because we've got our information density calculated offline.<|15.86|><|16.10|> So all we do is treat the whole lot as one massive document.<|19.54|><|19.72|> I mean, it's not going to be so big<|21.10|><|21.10|> that we can't load in an information density for every utterance.<|25.68|><|26.26|> I mean, just summarise based on that.<|27.92|><|27.92|> I think you can do it online.<|29.34|>
then we do not even have to worry that much about the underlying x m l representation we can just query it well if we are going to do that we should try and store everything in in an x m l format as well yeah yeah well we do not even need to do that cause if we got our information density calculated off line so all we do is treat the whole lot as one massive document i mean they will it is not going to be so big that we can not load in a information density for every utterance and we can just summarize based on that i think you can do it on line
then we do not even have to worry that much about the underlying xml representation we can just query it if we are going to do that we should try and store everything in an xml format as well yeah well we do not even need to do that because we have got our information density calculated offline so all we do is treat the whole lot as one massive document i mean it is not going to be so big that we can not load in an information density for every utterance i mean just summarize based on that i think you can do it online
18.487394
BUT IT WON'T BE VERY UM PROCESSOR INTENSIVE OR MEMORY INTENSIVE I DON'T THINK DON'T THINK SO YEAH ARE WE STILL GONNA GO FOR DUMPING IT INTO A DATABASE ARE WE STILL GONNA DUMP IT INTO A DATABASE 'CAUSE IF WE ARE I RECKON WE SHOULD ALL READ OUR CLASSES OUT OF THE DATABASE IT'LL BE SO MUCH EASIER WELL IF WE'RE GONNA DUMP THE PART OF IT INTO A DATABASE ANYWAY WE MIGHT AS WELL DUMP ALL THE FIELDS WE WANT INTO THE DATABASE CALCULATE EVERYTHING FROM THERE
<|0.00|> It won't be very processor intensive or memory intensive I think.<|5.00|><|5.00|> I don't think so.<|6.00|><|6.00|> Are we still going to go for dumping it into a database?<|8.00|><|8.00|> Are we still going to dump it into a database?<|10.68|><|10.68|> Because if we are I reckon we should all read our classes out of the database.<|14.12|><|14.12|> It'll be so much easier.<|15.40|><|15.40|> Or if we're going to dump the part of it into a database anyway, we might as well dump all<|19.94|><|19.94|> the fields we want into the database, calculate everything from there.<|25.14|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:73856178:759404
23.73
I DON'T THINK THERE'S REALLY MUCH POINT IN DOING LIKE THAT WHEN IT'S JUST GONNA FEED OFF IN THE END THE INFORMATION DENSITY MEASURE BASICALLY AND THAT'S ALL CALCULATED OFF LINE SO WHAT YOU'RE REALLY DOING IS SORTING A LIST IS THE P COMPUTATIONALLY HARD PART OF IT WELL LIKE THE IDEAS WE'RE CALCULATING ARE INFORMATION DENSITY ALL OFF LINE FIRST FOR EVERY UTTERANCE IN THE WHOLE CORPUS RIGHT
<|0.00|> I don't think there's really much point in doing that when it's just going to feed off in the end<|4.46|><|4.46|> the information density measure<|7.64|><|7.64|> basically and that's all calculated offline so all you're really doing is sorting a list<|13.48|><|13.48|> it's the computationally hard part of it. Well like the idea is we're calculating our<|18.24|><|18.24|> information density all offline first for every utterance<|22.20|><|22.20|> in the whole corpus right<|23.74|>
i do not think there is really much point in doing like that when it is just going to feed off in the end the information density measure basically and that is all calculated off line so what you are really doing is sorting a list is the p computationally hard part of it well like the ideas we are calculating are information density all off line 1st for every utterance in the whole corpus right
i do not think there is really much point in doing that when it is just going to feed off in the end the information density measure basically and that is all calculated offline so all you are really doing is sorting a list it is the computationally hard part of it well like the idea is we are calculating our information density all offline 1st for every utterance in the whole corpus right
14.473684
THEN WE DON'T EVEN HAVE TO WORRY THAT MUCH ABOUT THE UNDERLYING X. M. L. REPRESENTATION WE CAN JUST QUERY IT WELL IF WE'RE GONNA DO THAT WE SHOULD TRY AND STORE EVERYTHING IN IN AN X. M. L. FORMAT AS WELL YEAH YEAH WELL WE DON'T EVEN NEED TO DO THAT 'CAUSE IF WE GOT OUR INFORMATION DENSITY CALCULATED OFF LINE SO ALL WE DO IS TREAT THE WHOLE LOT AS ONE MASSIVE DOCUMENT I MEAN THEY'LL IT'S NOT GONNA BE SO BIG THAT WE CAN'T LOAD IN A INFORMATION DENSITY FOR EVERY UTTERANCE AND WE CAN JUST SUMMARISE BASED ON THAT I THINK YOU CAN DO IT ON LINE
<|0.00|> then we don't even have to worry that much<|2.00|><|2.00|> about the underlying XML representation.<|4.72|><|4.98|> We can just query it.<|5.82|><|5.90|> If we're going to do that,<|6.74|><|6.80|> we should try and store everything in an XML format as well.<|11.00|><|11.16|> Yeah.<|11.26|><|11.46|> Well, we don't even need to do that<|13.24|><|13.24|> because we've got our information density calculated offline.<|15.86|><|16.10|> So all we do is treat the whole lot as one massive document.<|19.54|><|19.72|> I mean, it's not going to be so big<|21.10|><|21.10|> that we can't load in an information density for every utterance.<|25.68|><|26.26|> I mean, just summarise based on that.<|27.92|><|27.92|> I think you can do it online.<|29.34|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:88638344:899564
28.110001
SO WHAT YOU DO IS YOU SAY IF YOU'RE LOOKING AT A SERIES OF MEETINGS YOU JUST SAY WELL OUR WHOLE DOCUMENT COMPRISES OF ALL THESE STUCK TOGETHER AND THEN ALL YOU HAVE TO DO IS SORT THEM BY J INFORMATION DENSITY LIKE MAYBE WEIGHTED WITH THE SEARCH TERMS AND THEN EXTRACT THEM I DON'T THINK IT'S TOO SLOW TO DO ON LINE TO BE HONEST IS THAT YEAH WELL ON THE UTTERANCE LEVEL I WAS THINKING SO THE UTTERANCES WITH THE HIGHEST LIKE MEAN INFORMATION DENSITY
<|0.00|> So all you do is you say, if you're looking at a series of meetings, you just say, well,<|4.12|><|4.12|> our whole document comprises of all these stuck together.<|8.28|><|8.28|> And then all you have to do is sort them by information density, like maybe weighted with<|14.44|><|14.44|> the search terms and then extract them.<|16.94|><|16.94|> I don't think it's too slow to do online, to be honest.<|20.72|><|20.72|> Is that well on the utterance level, I was thinking.<|24.02|><|24.02|> So the utterances with the highest like mean information density.<|28.12|>
so what you do is you say if you are looking at a series of meetings you just say well our whole document comprises of all these stuck together and then all you have to do is sort them by j information density like maybe weighted with the search terms and then extract them i do not think it is too slow to do on line to be honest is that yeah well on the utterance level i was thinking so the utterances with the highest like mean information density
so all you do is you say if you are looking at a series of meetings you just say well our whole document comprises of all these stuck together and then all you have to do is sort them by information density like maybe weighted with the search terms and then extract them i do not think it is too slow to do online to be honest is that well on the utterance level i was thinking so the utterances with the highest like mean information density
5.555555
I DON'T THINK THERE'S REALLY MUCH POINT IN DOING LIKE THAT WHEN IT'S JUST GONNA FEED OFF IN THE END THE INFORMATION DENSITY MEASURE BASICALLY AND THAT'S ALL CALCULATED OFF LINE SO WHAT YOU'RE REALLY DOING IS SORTING A LIST IS THE P COMPUTATIONALLY HARD PART OF IT WELL LIKE THE IDEAS WE'RE CALCULATING ARE INFORMATION DENSITY ALL OFF LINE FIRST FOR EVERY UTTERANCE IN THE WHOLE CORPUS RIGHT
<|0.00|> I don't think there's really much point in doing that when it's just going to feed off in the end<|4.46|><|4.46|> the information density measure<|7.64|><|7.64|> basically and that's all calculated offline so all you're really doing is sorting a list<|13.48|><|13.48|> it's the computationally hard part of it. Well like the idea is we're calculating our<|18.24|><|18.24|> information density all offline first for every utterance<|22.20|><|22.20|> in the whole corpus right<|23.74|>
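The online step described in this exchange is essentially scoring and sorting: each utterance already carries an information density computed offline, the score is optionally boosted when the utterance matches the search terms, and the top entries are extracted. A rough Java sketch of that idea; `Utterance`, the 2x search-term boost, and `summarise` are illustrative assumptions, not the project's actual design:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Set;

// Hypothetical online step: sort utterances by precomputed density,
// weighted by search terms, and extract the top N.
class Utterance {
    String text;
    double density;  // mean information density, computed offline
    Utterance(String text, double density) { this.text = text; this.density = density; }

    // Simple illustrative weighting: boost utterances containing a search term.
    double score(Set<String> searchTerms) {
        for (String term : searchTerms) {
            if (text.toLowerCase().contains(term)) return density * 2.0;
        }
        return density;
    }
}

class Summariser {
    static List<Utterance> summarise(List<Utterance> all, Set<String> terms, int n) {
        List<Utterance> sorted = new ArrayList<>(all);
        sorted.sort(Comparator.comparingDouble((Utterance u) -> u.score(terms)).reversed());
        return sorted.subList(0, Math.min(n, sorted.size()));
    }
}
```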
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:28742646:897324
28.040001
WELL THE TROUBLE WITH DOING IT ON THE WORD LEVEL IS IF YOU WANT THE AUDIO TO SYNCH UP YOU'VE GOT NO WAY OF GETTING IN AND EXTRACTING JUST THAT WORD I MEAN IT'S IMPOSSIBLE FOR EVERY SINGLE WORD OH OKAY YEAH I DON'T THINK THAT WILL DO IT WE'LL HAVE TO BUFFER IT WELL THE SKIMMING'S GONNA USE THE IMPORTANCE BUT LIKE AT FIRST IT'S JUST GONNA BE I. D. F. WELL MOSTLY SKIMMING YEAH YEAH WELL THE NICE THING ABOUT THAT IS IT WILL AUTOMATICALLY BE IN SENTENCES WELL MORE OR LESS SO IT WILL MAKE MORE SENSE AND IF YOU GET JUST EXTRACT WORDS YEAH I SEE IT
<|0.00|> Well, the trouble with doing it on the word level is if you want the audio to sync up,<|3.12|><|3.16|> you've got no way of getting in and extracting just that word.<|5.86|><|6.62|> I mean, it's impossible for every single word.<|8.66|><|9.32|> Oh, okay.<|9.94|><|10.08|> Yeah.<|10.26|><|10.32|> I don't think the player will do it.<|11.82|><|11.88|> We'll have to buffer it.<|12.74|><|12.80|> Well, the skimming's going to use the importance.<|14.74|><|14.96|> But, like, at first, it's just going to be idea.<|17.24|><|17.36|> Well, mostly skimming, yeah.<|18.56|><|18.62|> Yeah.<|18.80|><|18.80|> Well, the nice thing about that is it will automatically be in sentences.<|21.98|><|22.42|> Well, more or less.<|23.26|><|24.50|> So it will make more sense.<|25.68|><|25.78|> And if you just extract words, yeah.<|27.42|><|27.68|> That's it.<|28.04|>
well the trouble with doing it on the word level is if you want the audio to synch up you have got no way of getting in and extracting just that word i mean it is impossible for every single word 0 okay yeah i do not think that will do it we will have to buffer it well the skimming is going to use the importance but like at 1st it is just going to be i d f well mostly skimming yeah yeah well the nice thing about that is it will automatically be in sentences well more or less so it will make more sense and if you get just extract words yeah i see it
well the trouble with doing it on the word level is if you want the audio to sync up you have got no way of getting in and extracting just that word i mean it is impossible for every single word 0 okay yeah i do not think the player will do it we will have to buffer it well the skimming is going to use the importance but like at 1st it is just going to be idea well mostly skimming yeah yeah well the nice thing about that is it will automatically be in sentences well more or less so it will make more sense and if you just extract words yeah that is it
7.563025
SO WHAT YOU DO IS YOU SAY IF YOU'RE LOOKING AT A SERIES OF MEETINGS YOU JUST SAY WELL OUR WHOLE DOCUMENT COMPRISES OF ALL THESE STUCK TOGETHER AND THEN ALL YOU HAVE TO DO IS SORT THEM BY J INFORMATION DENSITY LIKE MAYBE WEIGHTED WITH THE SEARCH TERMS AND THEN EXTRACT THEM I DON'T THINK IT'S TOO SLOW TO DO ON LINE TO BE HONEST IS THAT YEAH WELL ON THE UTTERANCE LEVEL I WAS THINKING SO THE UTTERANCES WITH THE HIGHEST LIKE MEAN INFORMATION DENSITY
<|0.00|> So all you do is you say, if you're looking at a series of meetings, you just say, well,<|4.12|><|4.12|> our whole document comprises of all these stuck together.<|8.28|><|8.28|> And then all you have to do is sort them by information density, like maybe weighted with<|14.44|><|14.44|> the search terms and then extract them.<|16.94|><|16.94|> I don't think it's too slow to do online, to be honest.<|20.72|><|20.72|> Is that well on the utterance level, I was thinking.<|24.02|><|24.02|> So the utterances with the highest like mean information density.<|28.12|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:92008416:919724
28.74
BUT IT'LL NEED TO BE CALCULATED AT WORD LEVEL THOUGH BECAUSE OTHERWISE THERE WON'T BE ENOUGH OCCURRENCES OF THE TERMS TO MAKE ANY MEANINGFUL SENSE YEAH YEAH I RECKON YOU CAN JUST MEAN IT OVER THE SENTENCE I THINK WE SHOULD FILTER THEM MAYBE WE SHOULD HAVE LIKE UM A CUT OFF SO IT A W WORD ONLY GETS A VALUE IF IT'S ABOVE A CERTAIN THRESHOLD SO ANYTHING THAT HAS LESS THAN SAY NOUGHT POINT FIVE IMPORTANCE GETS ASSIGNED TO ZERO YEAH THAT'S THE OTHER TH YEAH I THINK WE'LL HAVE TO BUFFER THE AUDIO
<|0.00|> They'll need to be calculated at word level, though,<|2.60|><|2.64|> because otherwise there won't be enough occurrences of the terms<|5.60|><|5.60|> to make any meaningful sense.<|7.86|><|8.16|> Yeah, I reckon you can just mean it over the sentence.<|10.54|><|10.80|> I think we should filter them.<|12.12|><|12.24|> Maybe we should have, like, a cut-off<|14.80|><|14.80|> so a word only gets a value if it's above a certain threshold.<|19.62|><|20.30|> So anything that has less than, say, 0.5 importance<|24.18|><|24.18|> gets assigned to zero.<|25.44|><|25.56|> Yeah, that's the other one.<|26.62|><|26.66|> I think we'll have to buffer the audio.<|28.74|>
but it will need to be calculated at word level though because otherwise there will not be enough occurrences of the terms to make any meaningful sense yeah yeah i reckon you can just mean it over the sentence i think we should filter them maybe we should have like a cut off so it a w word only gets a value if it is above a certain threshold so anything that has less than say nought .5 importance gets assigned to 0 yeah that is the other th yeah i think we will have to buffer the audio
they will need to be calculated at word level though because otherwise there will not be enough occurrences of the terms to make any meaningful sense yeah i reckon you can just mean it over the sentence i think we should filter them maybe we should have like a cut off so a word only gets a value if it is above a certain threshold so anything that has less than say 0.5 importance gets assigned to 0 yeah that is the other one i think we will have to buffer the audio
9.090909
WELL THE TROUBLE WITH DOING IT ON THE WORD LEVEL IS IF YOU WANT THE AUDIO TO SYNCH UP YOU'VE GOT NO WAY OF GETTING IN AND EXTRACTING JUST THAT WORD I MEAN IT'S IMPOSSIBLE FOR EVERY SINGLE WORD OH OKAY YEAH I DON'T THINK THAT WILL DO IT WE'LL HAVE TO BUFFER IT WELL THE SKIMMING'S GONNA USE THE IMPORTANCE BUT LIKE AT FIRST IT'S JUST GONNA BE I. D. F. WELL MOSTLY SKIMMING YEAH YEAH WELL THE NICE THING ABOUT THAT IS IT WILL AUTOMATICALLY BE IN SENTENCES WELL MORE OR LESS SO IT WILL MAKE MORE SENSE AND IF YOU GET JUST EXTRACT WORDS YEAH I SEE IT
<|0.00|> Well, the trouble with doing it on the word level is if you want the audio to sync up,<|3.12|><|3.16|> you've got no way of getting in and extracting just that word.<|5.86|><|6.62|> I mean, it's impossible for every single word.<|8.66|><|9.32|> Oh, okay.<|9.94|><|10.08|> Yeah.<|10.26|><|10.32|> I don't think the player will do it.<|11.82|><|11.88|> We'll have to buffer it.<|12.74|><|12.80|> Well, the skimming's going to use the importance.<|14.74|><|14.96|> But, like, at first, it's just going to be idea.<|17.24|><|17.36|> Well, mostly skimming, yeah.<|18.56|><|18.62|> Yeah.<|18.80|><|18.80|> Well, the nice thing about that is it will automatically be in sentences.<|21.98|><|22.42|> Well, more or less.<|23.26|><|24.50|> So it will make more sense.<|25.68|><|25.78|> And if you just extract words, yeah.<|27.42|><|27.68|> That's it.<|28.04|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:91122218:886124
27.690001
BUT I DON'T THINK IT WILL BE VERY HARD I THINK IT WOULD BE LIKE AN HOUR OR TWO'S WORK LIKE JUST BUILD AN ANOTHER F WAVE FILE ESSENTIALLY YEAH I MEAN I BET THERE WOULD BE PACKAGES IN MEMORY YEAH SO JUST LIKE UNP THERE'S BOUND TO BE LIKE A MEDIA WAVE OBJECT OR SOMETHING LIKE THAT AND JUST BUILD ONE IN MEMORY I DON'T KNOW I HAVE NO IDEA BUT IT MUST HAVE LIKE CLASSES FOR DEALING WITH FILES AND IF IT HAS CLASSES FOR CONCATENATING FILES YOU CAN DO IT IN MEMORY SO
<|0.00|> But I don't think it'll be very hard. I think it'll be like an hour or two's work.<|3.46|><|3.46|> Like just build another wave file essentially.<|7.64|><|7.64|> Yeah, I mean I bet there will be packages.<|9.96|><|9.96|> In memory, yeah.<|11.24|><|11.24|> So just like, there's bound to be like a media wave object or something like that.<|16.08|><|16.08|> And just build one in memory.<|17.62|><|17.62|> I don't know. I have no idea.<|19.70|><|19.70|> But it must have like classes for dealing with files.<|23.26|><|23.26|> And if it has classes for concatenating files you can do it in memory.<|27.70|>
but i do not think it will be very hard i think it would be like an hour or 2 is work like just build an another f wave file essentially yeah i mean i bet there would be packages in memory yeah so just like unp there is bound to be like a media wave object or something like that and just build one in memory i do not know i have no idea but it must have like classes for dealing with files and if it has classes for concatenating files you can do it in memory so
but i do not think it will be very hard i think it will be like an hour or 2 is work like just build another wave file essentially yeah i mean i bet there will be packages in memory yeah so just like there is bound to be like a media wave object or something like that and just build one in memory i do not know i have no idea but it must have like classes for dealing with files and if it has classes for concatenating files you can do it in memory
6
BUT IT'LL NEED TO BE CALCULATED AT WORD LEVEL THOUGH BECAUSE OTHERWISE THERE WON'T BE ENOUGH OCCURRENCES OF THE TERMS TO MAKE ANY MEANINGFUL SENSE YEAH YEAH I RECKON YOU CAN JUST MEAN IT OVER THE SENTENCE I THINK WE SHOULD FILTER THEM MAYBE WE SHOULD HAVE LIKE UM A CUT OFF SO IT A W WORD ONLY GETS A VALUE IF IT'S ABOVE A CERTAIN THRESHOLD SO ANYTHING THAT HAS LESS THAN SAY NOUGHT POINT FIVE IMPORTANCE GETS ASSIGNED TO ZERO YEAH THAT'S THE OTHER TH YEAH I THINK WE'LL HAVE TO BUFFER THE AUDIO
<|0.00|> They'll need to be calculated at word level, though,<|2.60|><|2.64|> because otherwise there won't be enough occurrences of the terms<|5.60|><|5.60|> to make any meaningful sense.<|7.86|><|8.16|> Yeah, I reckon you can just mean it over the sentence.<|10.54|><|10.80|> I think we should filter them.<|12.12|><|12.24|> Maybe we should have, like, a cut-off<|14.80|><|14.80|> so a word only gets a value if it's above a certain threshold.<|19.62|><|20.30|> So anything that has less than, say, 0.5 importance<|24.18|><|24.18|> gets assigned to zero.<|25.44|><|25.56|> Yeah, that's the other one.<|26.62|><|26.66|> I think we'll have to buffer the audio.<|28.74|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:39417660:907882
28.369938
WELL WHAT I THINK I MIGHT TRY AND BUILD IS BASICALLY A CLASS THAT YOU JUST FEED IT A LINKED LIST OF UM DIFFERENT WAVE FORMS AND IT WILL JUST STRING THEM ALL TOGETHER WITH MAYBE I DON'T KNOW TENTH OF A SECOND SILENCE IN BETWEEN EACH ONE OR SOMETHING LIKE THAT NORMALISE IT YEAH OH YEAH YEAH WE'LL NEED THAT WE ALSO REALLY WANNA BE ABLE TO SEARCH BY WHO'S SPEAKING AS WELL IT DOESN'T MATTER 'CAUSE ALL THE CALCULATION'S DONE OFF LINE THAT'S EASY YOU JUST LIKE CREATE A NEW X. M. L. DOCUMENT IN MEMORY
<|0.00|> Well what I think I might try and build is basically a class that you just feed it a<|4.82|><|4.82|> linked list of different waveforms that will just string them all together with maybe a<|12.94|><|12.94|> tenth of a second silence in between each one or something like that.<|16.12|><|16.12|> Normalise it, yeah.<|17.12|><|17.12|> Oh yeah, yeah we need that.<|18.54|><|18.54|> We also really want to be able to search by who's speaking as well.<|21.40|><|21.40|> It doesn't matter because all the calculation is done offline.<|24.44|><|24.44|> That's easy, we just create a new XML document in memory.<|28.38|>
well what i think i might try and build is basically a class that you just feed it a linked list of different wave forms and it will just string them all together with maybe i do not know 10th of a 2nd silence in between each one or something like that normalize it yeah 0 yeah yeah we will need that we also really want to be able to search by who is speaking as well it does not matter cause all the calculation is done off line that is easy you just like create a new x m l document in memory
well what i think i might try and build is basically a class that you just feed it a linked list of different waveforms that will just string them all together with maybe a 10th of a 2nd silence in between each one or something like that normalize it yeah 0 yeah yeah we need that we also really want to be able to search by who is speaking as well it does not matter because all the calculation is done offline that is easy we just create a new xml document in memory
16.346153
BUT I DON'T THINK IT WILL BE VERY HARD I THINK IT WOULD BE LIKE AN HOUR OR TWO'S WORK LIKE JUST BUILD AN ANOTHER F WAVE FILE ESSENTIALLY YEAH I MEAN I BET THERE WOULD BE PACKAGES IN MEMORY YEAH SO JUST LIKE UNP THERE'S BOUND TO BE LIKE A MEDIA WAVE OBJECT OR SOMETHING LIKE THAT AND JUST BUILD ONE IN MEMORY I DON'T KNOW I HAVE NO IDEA BUT IT MUST HAVE LIKE CLASSES FOR DEALING WITH FILES AND IF IT HAS CLASSES FOR CONCATENATING FILES YOU CAN DO IT IN MEMORY SO
<|0.00|> But I don't think it'll be very hard. I think it'll be like an hour or two's work.<|3.46|><|3.46|> Like just build another wave file essentially.<|7.64|><|7.64|> Yeah, I mean I bet there will be packages.<|9.96|><|9.96|> In memory, yeah.<|11.24|><|11.24|> So just like, there's bound to be like a media wave object or something like that.<|16.08|><|16.08|> And just build one in memory.<|17.62|><|17.62|> I don't know. I have no idea.<|19.70|><|19.70|> But it must have like classes for dealing with files.<|23.26|><|23.26|> And if it has classes for concatenating files you can do it in memory.<|27.70|>
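The class sketched in this exchange, one that is fed a list of waveforms and strings them together with about a tenth of a second of silence between clips, can be approximated with the standard javax.sound.sampled API. The sketch below assumes a non-empty clip list sharing one signed-PCM format; real AMI audio would need format checks, resampling, and error handling:

```java
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.SequenceInputStream;
import java.util.List;

// Rough sketch of the wave-concatenation class described above.
public class WaveConcatenator {
    public static void concatenate(List<File> clips, File out) throws Exception {
        AudioInputStream first = AudioSystem.getAudioInputStream(clips.get(0));
        AudioFormat fmt = first.getFormat();
        AudioInputStream result = first;
        long frames = first.getFrameLength();
        // A tenth of a second of silence, as suggested in the discussion.
        int gapFrames = (int) (fmt.getFrameRate() / 10);
        byte[] silence = new byte[gapFrames * fmt.getFrameSize()]; // zeros = silence for signed PCM

        for (File clip : clips.subList(1, clips.size())) {
            AudioInputStream gap =
                new AudioInputStream(new ByteArrayInputStream(silence), fmt, gapFrames);
            AudioInputStream next = AudioSystem.getAudioInputStream(clip);
            frames += gapFrames + next.getFrameLength();
            result = new AudioInputStream(
                new SequenceInputStream(result, new SequenceInputStream(gap, next)),
                fmt, frames);
        }
        AudioSystem.write(result, AudioFileFormat.Type.WAVE, out);
    }
}
```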
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:6456698:861164
26.91
I DON'T THINK IT'S REALLY THAT MUCH OF A PROBLEM BECAUSE IF IT'S TOO BIG WHAT WE CAN DO IS JUST WELL ALL THE OFF LINE STUFF DOESN'T REALLY MATTER AND ALL WE CAN DO IS JUST PROCESS A BIT AT A TIME LIKE FOR SUMMARISATION SAY WE WANTED A HUNDRED UTTERANCES IN THE SUMMARY JUST LOOK AT THE MEETING TAKE THE TOP ONE HUNDRED UTTERANCES IN EACH OTHER MEETING IF IT SCORES HIGHER THAN THE ONES ALREADY IN THE SUMMARY SO FAR JUST REPLACE THEM AND THEN YOU ONLY HAVE TO PROCESS ONE MEETING AT A TIME
<|0.00|> I don't think it's really that much of a problem because if it's too big, what we can do is just, well, all the offline stuff doesn't really matter.<|6.72|><|7.32|> And all we can do is just process a bit at a time, like for summarization, say we wanted 100 utterances in the summary.<|13.98|><|14.58|> Just look at the meeting, take the top 100 utterances in each other meeting if it scores higher than the ones already in the summary so far, just replace them and then you only have to process one meeting at a time.<|26.92|>
i do not think it is really that much of a problem because if it is too big what we can do is just well all the off line stuff does not really matter and all we can do is just process a bit at a time like for summarisation say we wanted a 100 utterances in the summary just look at the meeting take the top 100 utterances in each other meeting if it scores higher than the ones already in the summary so far just replace them and then you only have to process one meeting at a time
i do not think it is really that much of a problem because if it is too big what we can do is just well all the offline stuff does not really matter and all we can do is just process a bit at a time like for summarization say we wanted 100 utterances in the summary just look at the meeting take the top 100 utterances in each other meeting if it scores higher than the ones already in the summary so far just replace them and then you only have to process one meeting at a time
3.960396
WELL WHAT I THINK I MIGHT TRY AND BUILD IS BASICALLY A CLASS THAT YOU JUST FEED IT A LINKED LIST OF UM DIFFERENT WAVE FORMS AND IT WILL JUST STRING THEM ALL TOGETHER WITH MAYBE I DON'T KNOW TENTH OF A SECOND SILENCE IN BETWEEN EACH ONE OR SOMETHING LIKE THAT NORMALISE IT YEAH OH YEAH YEAH WE'LL NEED THAT WE ALSO REALLY WANNA BE ABLE TO SEARCH BY WHO'S SPEAKING AS WELL IT DOESN'T MATTER 'CAUSE ALL THE CALCULATION'S DONE OFF LINE THAT'S EASY YOU JUST LIKE CREATE A NEW X. M. L. DOCUMENT IN MEMORY
<|0.00|> Well what I think I might try and build is basically a class that you just feed it a<|4.82|><|4.82|> linked list of different waveforms that will just string them all together with maybe a<|12.94|><|12.94|> tenth of a second silence in between each one or something like that.<|16.12|><|16.12|> Normalise it, yeah.<|17.12|><|17.12|> Oh yeah, yeah we need that.<|18.54|><|18.54|> We also really want to be able to search by who's speaking as well.<|21.40|><|21.40|> It doesn't matter because all the calculation is done offline.<|24.44|><|24.44|> That's easy, we just create a new XML document in memory.<|28.38|>
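The scheme described above, keeping the best 100 utterances seen so far and replacing weaker ones as each new meeting is processed, is a classic bounded top-N selection, which a min-heap handles in O(log N) per utterance while holding only one meeting in memory at a time. A sketch under those assumptions; `ScoredUtterance` and `RunningSummary` are hypothetical names:

```java
import java.util.List;
import java.util.PriorityQueue;

// Hypothetical record holding an utterance and its precomputed score.
record ScoredUtterance(String text, double score) {}

// Keeps a min-heap of the best N utterances seen so far, so meetings
// can be processed one at a time, as described in the discussion.
class RunningSummary {
    private final int capacity;
    private final PriorityQueue<ScoredUtterance> best =
        new PriorityQueue<>((a, b) -> Double.compare(a.score(), b.score())); // min-heap

    RunningSummary(int capacity) { this.capacity = capacity; }

    void addMeeting(List<ScoredUtterance> meeting) {
        for (ScoredUtterance u : meeting) {
            if (best.size() < capacity) {
                best.add(u);
            } else if (u.score() > best.peek().score()) {
                best.poll();  // drop the current weakest entry
                best.add(u);
            }
        }
    }
}
```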
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:115731280:890604
27.83
MODULE CHANGES THAT RATHER THAN THE UNDERLYING DATA AND THEN HAVE THAT X. M. L. UH NITE X. M. L. DOCUMENT TIED TO THE INTERFACE WELL YOU CAN MAKE IT IN A FILE IF YOU WANT MM-HMM THEY ARE UTTERANCES AREN'T THEY THE SEGMENTS ARE UTTERANCES AREN'T THEY YEAH ALRIGHT OKAY WELL THAT'S EASY WELL IT'S CLOSE ENOUGH ISN'T IT IT MAY NOT BE EXACT EVERY TIME BUT IT'S A SO SORT OF SIZE WE'RE LOOKING FOR YEAH YEAH YEAH BUT WHY DON'T WE JUST WRITE IT AS A NEW X. M. L. FILE
<|0.00|> module changes that<|2.04|><|2.04|> rather than the underlying data<|4.38|><|4.38|> and then have that<|6.10|><|6.10|> xml, night xml document tied to the interface. Well you can make it in a file if you want<|11.34|><|11.34|> they are utterances aren't they?<|13.36|><|13.36|> the segments are utterances aren't they?<|16.38|><|16.38|> alright ok, well that's easy, well it's close enough isn't it?<|19.90|><|19.90|> it may not be exact every time but it's the sort of size we're looking for<|23.80|><|23.80|> yeah yeah yeah, well why don't we just write it as a new xml file<|27.82|>
module changes that rather than the underlying data and then have that x m l nite x m l document tied to the interface well you can make it in a file if you want they are utterances are not they the segments are utterances are not they yeah alright okay well that is easy well it is close enough is not it it may not be exact every time but it is a so sort of size we are looking for yeah yeah yeah but why do not we just write it as a new x m l file
module changes that rather than the underlying data and then have that xml night xml document tied to the interface well you can make it in a file if you want they are utterances are not they the segments are utterances are not they alright ok well that is easy well it is close enough is not it it may not be exact every time but it is the sort of size we are looking for yeah yeah yeah well why do not we just write it as a new xml file
15
OKAY SO MAYBE WE SHOULD BUILD A B STORE A MEAN MEASURE FOR THE SEGMENTS AND MEETINGS AS WELL AND SPEAKER SPEAKER AND UM TOPIC SEGMENTING WE'LL NEED AS WELL YEAH WELL YEAH AND THEN IT'LL F PRESERVE THE ORDER WHEN IT'S DISPLAYED THE YEAH YEAH YEAH I THINK SO SO WE SHOULD BASICALLY MAKE OUR OWN X. M. L. DOCUMENT IN MEMORY THAT EVERYONE'S UM
<|0.00|> Okay, so maybe we should build a...<|3.12|><|3.12|> store a mean measure for the segments and meetings as well.<|7.22|><|7.30|> And the speaker.<|7.94|><|9.20|> Speaker and topic segmenting will need as well.<|14.98|><|15.10|> Well, yeah, and then it'll preserve the order when it's displayed.<|19.88|><|20.36|> Yeah.<|20.62|><|21.62|> So we should basically make our own XML document in memory<|26.06|><|26.06|> that everyone's...<|28.12|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:76362452:902764
28.209999
CAN NITE HANDLE JUST LOADING ARBITRARY UH NEW LIKE ATTRIBUTES AND STUFF I MEAN I WOULD HAVE THOUGHT THEY'D MAKE IT ABLE TO YEAH SO WHY DO WE NEED TO HAVE TWO X. M. L. TREES IN MEMORY AT ONCE THE OTHER THING IS THAT WOULD MEAN WE'D BE USING THEIR PARSER AS WELL WHICH MEANS WE WOULDN'T HAVE TO PARSE ANYTHING WHICH BE QUITE NICE 'CAUSE THEIR PARSER IS PROBABLY MUCH FASTER THAN ANYTHING WE'VE COME UP WITH ANYWAY YEAH I MEAN WE CAN PROCESS IT IN CHUNKS IF IT GETS TOO BIG BASICALLY WE CAN JUST PROCESS IT ALL IN CHUNKS IF IT GETS TOO BIG TO LOAD IT INTO MEMORY
<|0.00|> can Knight handle just loading arbitrary new attributes and stuff?<|5.64|><|5.64|> I mean, I would have thought they'd make it able to.<|7.72|><|7.72|> So why do we need to have two XML trees in memory at once?<|10.72|><|10.72|> The other thing is that would mean we'd be using their parser as well,<|13.88|><|13.88|> which means we wouldn't have to parse anything, which would be quite nice<|17.06|><|17.06|> because their parser is probably much faster than anything we've come up with anyway.<|20.32|><|20.32|> Yeah, I mean, we can process it in chunks if it gets too big, basically.<|23.82|><|23.82|> We can just process it all in chunks if it gets too big to load into memory.<|28.22|>
can nite handle just loading arbitrary new like attributes and stuff i mean i would have thought they would make it able to yeah so why do we need to have 2 x m l trees in memory at once the other thing is that would mean we would be using their parser as well which means we would not have to parse anything which be quite nice cause their parser is probably much faster than anything we have come up with anyway yeah i mean we can process it in chunks if it gets too big basically we can just process it all in chunks if it gets too big to load it into memory
can knight handle just loading arbitrary new attributes and stuff i mean i would have thought they would make it able to so why do we need to have 2 xml trees in memory at once the other thing is that would mean we would be using their parser as well which means we would not have to parse anything which would be quite nice because their parser is probably much faster than anything we have come up with anyway yeah i mean we can process it in chunks if it gets too big basically we can just process it all in chunks if it gets too big to load into memory
7.758621
MODULE CHANGES THAT RATHER THAN THE UNDERLYING DATA AND THEN HAVE THAT X. M. L. UH NITE X. M. L. DOCUMENT TIED TO THE INTERFACE WELL YOU CAN MAKE IT IN A FILE IF YOU WANT MM-HMM THEY ARE UTTERANCES AREN'T THEY THE SEGMENTS ARE UTTERANCES AREN'T THEY YEAH ALRIGHT OKAY WELL THAT'S EASY WELL IT'S CLOSE ENOUGH ISN'T IT IT MAY NOT BE EXACT EVERY TIME BUT IT'S A SO SORT OF SIZE WE'RE LOOKING FOR YEAH YEAH YEAH BUT WHY DON'T WE JUST WRITE IT AS A NEW X. M. L. FILE
<|0.00|> module changes that<|2.04|><|2.04|> rather than the underlying data<|4.38|><|4.38|> and then have that<|6.10|><|6.10|> xml, night xml document tied to the interface. Well you can make it in a file if you want<|11.34|><|11.34|> they are utterances aren't they?<|13.36|><|13.36|> the segments are utterances aren't they?<|16.38|><|16.38|> alright ok, well that's easy, well it's close enough isn't it?<|19.90|><|19.90|> it may not be exact every time but it's the sort of size we're looking for<|23.80|><|23.80|> yeah yeah yeah, well why don't we just write it as a new xml file<|27.82|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:83214276:877164
27.41
I THINK WE PROBABLY WANT TO STORE SORRY I THINK WE PROBABLY WANT TO STORE UM A HIERARCHICAL INFORMATION DENSITY AS WELL SO LIKE AN INFORMAN MATION DENSITY SCORE FOR EACH MEETING AND EACH TOPIC SEGMENT 'CAUSE OTHERWISE WE'D BE RECALCULATING THE SAME THING OVER AND OVER AND OVER AGAIN YEAH AND THAT WILL OBVIOUSLY MAKE IT MUCH EASIER TO DISPLAY WELL IT MAY NOT FOR THE WHOLE MEETING BUT LIKE YEAH EXACTLY YEAH WELL WE CAN START OFF LIKE THAT
<|0.00|> I think we probably want to store, sorry, I think we probably want to store, um, a hierarchical<|7.88|><|7.88|> information density as well.<|9.62|><|9.62|> So like an information density score for each meeting and each topic segment, because otherwise<|15.64|><|15.64|> we'll be recalculating the same thing over and over and over again.<|19.12|><|19.12|> And that will obviously make it much easier to display.<|22.12|><|22.12|> Well, maybe not for the whole meeting, but like, yeah, exactly.<|25.42|><|25.42|> Yeah.<|26.42|><|26.42|> Well, we can start off by that.<|27.42|>
i think we probably want to store sorry i think we probably want to store a hierarchical information density as well so like an informan mation density score for each meeting and each topic segment cause otherwise we would be recalculating the same thing over and over and over again yeah and that will obviously make it much easier to display well it may not for the whole meeting but like yeah exactly yeah well we can start off like that
i think we probably want to store sorry i think we probably want to store a hierarchical information density as well so like an information density score for each meeting and each topic segment because otherwise we will be recalculating the same thing over and over and over again and that will obviously make it much easier to display well maybe not for the whole meeting but like yeah exactly yeah well we can start off by that
9.876543
CAN NITE HANDLE JUST LOADING ARBITRARY UH NEW LIKE ATTRIBUTES AND STUFF I MEAN I WOULD HAVE THOUGHT THEY'D MAKE IT ABLE TO YEAH SO WHY DO WE NEED TO HAVE TWO X. M. L. TREES IN MEMORY AT ONCE THE OTHER THING IS THAT WOULD MEAN WE'D BE USING THEIR PARSER AS WELL WHICH MEANS WE WOULDN'T HAVE TO PARSE ANYTHING WHICH BE QUITE NICE 'CAUSE THEIR PARSER IS PROBABLY MUCH FASTER THAN ANYTHING WE'VE COME UP WITH ANYWAY YEAH I MEAN WE CAN PROCESS IT IN CHUNKS IF IT GETS TOO BIG BASICALLY WE CAN JUST PROCESS IT ALL IN CHUNKS IF IT GETS TOO BIG TO LOAD IT INTO MEMORY
<|0.00|> can Knight handle just loading arbitrary new attributes and stuff?<|5.64|><|5.64|> I mean, I would have thought they'd make it able to.<|7.72|><|7.72|> So why do we need to have two XML trees in memory at once?<|10.72|><|10.72|> The other thing is that would mean we'd be using their parser as well,<|13.88|><|13.88|> which means we wouldn't have to parse anything, which would be quite nice<|17.06|><|17.06|> because their parser is probably much faster than anything we've come up with anyway.<|20.32|><|20.32|> Yeah, I mean, we can process it in chunks if it gets too big, basically.<|23.82|><|23.82|> We can just process it all in chunks if it gets too big to load into memory.<|28.22|>
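The hierarchical caching proposed here, storing one density score per meeting and per topic segment so the same mean is never recomputed, could look roughly like the memoised lookup below; the segment identifiers and density lists are illustrative assumptions rather than the project's actual data model:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: compute each topic segment's mean utterance density once and
// cache it, instead of re-averaging on every query.
class DensityCache {
    private final Map<String, Double> segmentMean = new HashMap<>();

    double meanFor(String segmentId, List<Double> utteranceDensities) {
        return segmentMean.computeIfAbsent(segmentId, id ->
            utteranceDensities.stream()
                              .mapToDouble(Double::doubleValue)
                              .average()
                              .orElse(0.0));
    }
}
```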
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:36883866:868524
27.139999
THERE'S JUST LIKE FOR A BASELINE REALLY WELL I'M HALF WAY THROUGH IT'S NOT WORKING YET BUT IT WILL DO UM YEAH AND THEN AVERAGING IT OVER THE UTTERANCES BUT IT'S NOT LIKE UM RELATED TO THE CORPUS AT ALL IT'S JUST WORKING ON AN ARBITRARY TEXT FILE AT THE MOMENT NO IT WOULD BE USEFUL TO KNOW HOW EVERYONE'S GONNA STORE THEIR THINGS THOUGH YEAH YEAH WELL I'VE GOT LIKE A FEW HOURS FREE LIKE AFTER THIS IT'S THE MOST BORING TASK YEAH
<|0.00|> It's just like for a baseline really.<|1.70|><|1.70|> Well, I'm halfway through.<|2.94|><|2.94|> It's not working yet, but it will do.<|4.94|><|4.94|> Um, yeah.<|5.50|><|6.64|> And then averaging it over the utterances.<|8.80|><|8.80|> But it's not like related to the corpus at all.<|11.68|><|11.68|> It's just working on an arbitrary text file at the moment.<|15.28|><|15.28|> No.<|15.62|><|15.62|> It would be useful to know how everyone's going to store their things though.<|20.22|><|20.22|> Yeah.<|20.56|><|20.56|> Yeah.<|21.48|><|21.48|> Well, I've got like a few hours free.<|24.32|><|24.32|> Like after this.<|25.26|><|25.26|> It's a boring task.<|26.66|><|26.66|> Yeah.<|27.16|>
there is just like for a baseline really well i am half way through it is not working yet but it will do yeah and then averaging it over the utterances but it is not like related to the corpus at all it is just working on an arbitrary text file at the moment no it would be useful to know how everyone is going to store their things though yeah yeah well i have got like a few hours free like after this it is the most boring task yeah
it is just like for a baseline really well i am halfway through it is not working yet but it will do yeah and then averaging it over the utterances but it is not like related to the corpus at all it is just working on an arbitrary text file at the moment no it would be useful to know how everyone is going to store their things though yeah yeah well i have got like a few hours free like after this it is a boring task yeah
5.494505
WELL I WAS GONNA START OFF I'VE V GOT SORT OF HALF WAY THROUGH IMPLEMENTING ONE THAT DOES JUST I. D. F. AND THEN JUST I CAN CHANGE THAT TO WORK ON WHATEVER YEAH AND IT SHOULD BE WEIGHTED BY STUFF LIKE THE HOT SPOTS AND UM THE KEY WORDS IN THE SEARCH AND STUFF LIKE THAT DID HE NOT SAY SOMETHING ABOUT NAMED ENTITIES SO I THOUGHT HE SAID THERE WASN'T VERY MANY YEAH YEAH IT'S NOT T. F. I. D. F. IT'S JUST INVERSE DOCUMENT FREQUENCY 'CAUSE IT'S REALLY EASY TO DO BASICALLY
<|0.00|> I was going to start off, I've got sort of halfway through<|2.60|><|2.60|> Implementing one that does just IDF<|5.20|><|5.20|> And then just, I can change that to work on whatever<|8.68|><|8.68|> And it should be weighted by stuff like the hotspots<|11.78|><|11.78|> And the keywords and the search and stuff like that<|15.36|><|15.36|> Did he not say something about named entities?<|17.24|><|17.64|> I thought he said there wasn't very many<|19.32|><|19.32|> Yeah<|19.72|><|19.72|> It's not TFIDF, it's just inverse document frequency<|24.96|><|24.96|> Because it's really easy to do basically<|28.78|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:12362484:874924
27.34
OR AT LEAST UM SIMPLE VERSIONS OF THEM SO MAYBE WE SHOULD TRY DOING SOMETHING REALLY SIMPLE LIKE JUST DISPLAYING A WHOLE MEETING AND LIKE JUST BEING ABLE TO SCROLL THROUGH IT OR SOMETHING LIKE THAT YEAH ARE YOU FREE AFTER THIS HOW ABOUT FRIDAY THEN 'CAUSE I'M OFF ALL FRIDAY UH WEDNESDAY I'VE GOT A NINE 'TIL TWELVE YEAH NOTHING IN THE AFTERNOON I'VE GOT NOTHING IN THE AFTERNOON SO OKAY SO YOU HA YEAH WHERE ABOUT JUST IN APPLETON TOWER UH I'LL BE IN UM THE APPLETON TOWER ANYWAY
<|0.00|> Or at least simple versions of them.<|2.92|><|3.02|> So maybe we should try doing something really simple like just displaying a whole meeting.<|7.36|><|7.66|> And like just being able to scroll through it or something like that.<|10.56|><|10.74|> Yeah.<|10.90|><|11.06|> Are you free after this?<|12.22|><|12.28|> How about Friday then?<|13.40|><|13.50|> Because I'm off all Friday.<|14.46|><|15.04|> Wednesday I've got a 9 till 12.<|17.32|><|17.44|> Yeah, nothing in the afternoon.<|18.60|><|19.42|> I've got nothing in the afternoon.<|20.72|><|21.06|> Okay, so yeah.<|22.46|><|22.58|> What about just in Afton Tower?<|23.84|><|23.84|> I'll be in the Afton Tower one anyway.<|27.34|>
or at least simple versions of them so maybe we should try doing something really simple like just displaying a whole meeting and like just being able to scroll through it or something like that yeah are you free after this how about friday then cause i am off all friday wednesday i have got a 9 til 12 yeah nothing in the afternoon i have got nothing in the afternoon so okay so you ha yeah where about just in appleton tower i will be in the appleton tower anyway
or at least simple versions of them so maybe we should try doing something really simple like just displaying a whole meeting and like just being able to scroll through it or something like that yeah are you free after this how about friday then because i am off all friday wednesday i have got a 9 till 12 yeah nothing in the afternoon i have got nothing in the afternoon okay so yeah what about just in afton tower i will be in the afton tower one anyway
9.89011
THERE'S JUST LIKE FOR A BASELINE REALLY WELL I'M HALF WAY THROUGH IT'S NOT WORKING YET BUT IT WILL DO UM YEAH AND THEN AVERAGING IT OVER THE UTTERANCES BUT IT'S NOT LIKE UM RELATED TO THE CORPUS AT ALL IT'S JUST WORKING ON AN ARBITRARY TEXT FILE AT THE MOMENT NO IT WOULD BE USEFUL TO KNOW HOW EVERYONE'S GONNA STORE THEIR THINGS THOUGH YEAH YEAH WELL I'VE GOT LIKE A FEW HOURS FREE LIKE AFTER THIS IT'S THE MOST BORING TASK YEAH
<|0.00|> It's just like for a baseline really.<|1.70|><|1.70|> Well, I'm halfway through.<|2.94|><|2.94|> It's not working yet, but it will do.<|4.94|><|4.94|> Um, yeah.<|5.50|><|6.64|> And then averaging it over the utterances.<|8.80|><|8.80|> But it's not like related to the corpus at all.<|11.68|><|11.68|> It's just working on an arbitrary text file at the moment.<|15.28|><|15.28|> No.<|15.62|><|15.62|> It would be useful to know how everyone's going to store their things though.<|20.22|><|20.22|> Yeah.<|20.56|><|20.56|> Yeah.<|21.48|><|21.48|> Well, I've got like a few hours free.<|24.32|><|24.32|> Like after this.<|25.26|><|25.26|> It's a boring task.<|26.66|><|26.66|> Yeah.<|27.16|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:47417120:916204
28.629999
UM WELL I'LL BE THERE FROM TWELVE I'VE GOT SOME OTHER STUFF THAT NEEDS DONE ON MATLAB SO IF YOU'RE NOT THERE AT TWELVE I CAN JUST WORK ON THAT SO YEAH WHY W YEAH I'M JUST BUILDING A DICTIONARY OH MINE'S JUST GONNA USE THE UM HASH MAP ONE IN UM JAVA 'CAUSE I'M ONLY GONNA DO IT ON SMALL DOCUMENTS IT'S JUST LIKE BEF UNTIL THE INFORMATION DENSITY IS UP AND RUNNING JUST SOMETHING TO GET GIVE ME SOMETHING TO WORK WITH SO IT'S ONLY GONNA USE QUITE SMALL DOCUMENTS YOU SEE TO START WITH
<|0.00|> Well, I'll be there from 12. I've got some other stuff that needs done on Matlab, so<|6.12|><|6.12|> if you're not there at 12, I can just work on that.<|9.12|><|9.12|> Yeah, I'm just building a dictionary. I was just going to use the HashMap one in Java,<|16.12|><|16.12|> because I'm only going to do it on small documents. It's just like until the information density<|22.06|><|22.06|> is up and running. Just something to give me something to work with. So I was only going<|25.88|><|25.88|> to use quite small documents, you see, to start with.<|28.64|>
well i will be there from 12 i have got some other stuff that needs done on matlab so if you are not there at 12 i can just work on that so yeah why w yeah i am just building a dictionary 0 mine is just going to use the hash map one in java cause i am only going to do it on small documents it is just like bef until the information density is up and running just something to get give me something to work with so it is only going to use quite small documents you see to start with
well i will be there from 12 i have got some other stuff that needs done on matlab so if you are not there at 12 i can just work on that yeah i am just building a dictionary i was just going to use the hashmap one in java because i am only going to do it on small documents it is just like until the information density is up and running just something to give me something to work with so i was only going to use quite small documents you see to start with
13.333333
OR AT LEAST UM SIMPLE VERSIONS OF THEM SO MAYBE WE SHOULD TRY DOING SOMETHING REALLY SIMPLE LIKE JUST DISPLAYING A WHOLE MEETING AND LIKE JUST BEING ABLE TO SCROLL THROUGH IT OR SOMETHING LIKE THAT YEAH ARE YOU FREE AFTER THIS HOW ABOUT FRIDAY THEN 'CAUSE I'M OFF ALL FRIDAY UH WEDNESDAY I'VE GOT A NINE 'TIL TWELVE YEAH NOTHING IN THE AFTERNOON I'VE GOT NOTHING IN THE AFTERNOON SO OKAY SO YOU HA YEAH WHERE ABOUT JUST IN APPLETON TOWER UH I'LL BE IN UM THE APPLETON TOWER ANYWAY
<|0.00|> Or at least simple versions of them.<|2.92|><|3.02|> So maybe we should try doing something really simple like just displaying a whole meeting.<|7.36|><|7.66|> And like just being able to scroll through it or something like that.<|10.56|><|10.74|> Yeah.<|10.90|><|11.06|> Are you free after this?<|12.22|><|12.28|> How about Friday then?<|13.40|><|13.50|> Because I'm off all Friday.<|14.46|><|15.04|> Wednesday I've got a 9 till 12.<|17.32|><|17.44|> Yeah, nothing in the afternoon.<|18.60|><|19.42|> I've got nothing in the afternoon.<|20.72|><|21.06|> Okay, so yeah.<|22.46|><|22.58|> What about just in Afton Tower?<|23.84|><|23.84|> I'll be in the Afton Tower one anyway.<|27.34|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:26513492:704044
22
WHY DOES IT NEED TO BE CLASSIFIED INTO LIKE DIFFERENT SEGMENTS CAN WE JUST FILL A SECOND CLASS WITH JUNK THAT WE DON'T CARE ABOUT LIKE I DON'T KNOW COPIES OF SHAKESPEARE OR SOMETHING 'CAUSE IF WHAT WE'RE LOOKING FOR IS THE UM FREQUENCY STATISTICS I DON'T SEE HOW THAT WOULD BE CHANGED BY THE CLASSIFICATION I THE WELL THERE MAYBE ANOTHER TOOL AVAILABLE YEAH UM I CAN'T REMEMBER WHO'S GOT IT MIGHT BE WORDNET
<|0.00|> Why does it need to be classified into different segments?<|3.28|><|3.28|> Can we just fill a second class with junk that we don't care about?<|6.84|><|6.84|> Like, I don't know, copies of Shakespeare or something?<|9.36|><|9.36|> Because if all we're looking for is the frequency statistics,<|12.84|><|12.84|> I don't see how that would be changed by the classification.<|15.24|><|15.24|> I wondered that.<|15.96|><|15.96|> Well, there may be another tool available.<|17.80|><|17.80|> Yeah.<|18.40|><|19.64|> I can't remember who's got it.<|20.80|><|20.80|> It might be WordNet.<|22.00|>
why does it need to be classified into like different segments can we just fill a 2nd class with junk that we do not care about like i do not know copies of shakespeare or something cause if what we are looking for is the frequency statistics i do not see how that would be changed by the classification i the well there maybe another tool available yeah i can not remember who has got it might be wordnet
why does it need to be classified into different segments can we just fill a 2nd class with junk that we do not care about like i do not know copies of shakespeare or something because if all we are looking for is the frequency statistics i do not see how that would be changed by the classification i wondered that well there may be another tool available yeah i can not remember who has got it it might be wordnet
10.126582
UM WELL I'LL BE THERE FROM TWELVE I'VE GOT SOME OTHER STUFF THAT NEEDS DONE ON MATLAB SO IF YOU'RE NOT THERE AT TWELVE I CAN JUST WORK ON THAT SO YEAH WHY W YEAH I'M JUST BUILDING A DICTIONARY OH MINE'S JUST GONNA USE THE UM HASH MAP ONE IN UM JAVA 'CAUSE I'M ONLY GONNA DO IT ON SMALL DOCUMENTS IT'S JUST LIKE BEF UNTIL THE INFORMATION DENSITY IS UP AND RUNNING JUST SOMETHING TO GET GIVE ME SOMETHING TO WORK WITH SO IT'S ONLY GONNA USE QUITE SMALL DOCUMENTS YOU SEE TO START WITH
<|0.00|> Well, I'll be there from 12. I've got some other stuff that needs done on Matlab, so<|6.12|><|6.12|> if you're not there at 12, I can just work on that.<|9.12|><|9.12|> Yeah, I'm just building a dictionary. I was just going to use the HashMap one in Java,<|16.12|><|16.12|> because I'm only going to do it on small documents. It's just like until the information density<|22.06|><|22.06|> is up and running. Just something to give me something to work with. So I was only going<|25.88|><|25.88|> to use quite small documents, you see, to start with.<|28.64|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:42879890:937644
29.299999
BUT ONE OF THESE BIG CORPUSES HAS A LIST OF STOP WORDS THAT YOU CAN DOWNLOAD AND THEY'RE JUST BASICALLY LISTS OF REALLY UNINTERESTING BORING WORDS THAT WE COULD FILTER OUT BEFORE WE DO THAT IT'S LIKE THAT'S ONE THE PAPERS I READ THAT'S UM ONE THINGS THEY DID RIGHT AT THE BEGINNING IS THEY'VE GOT THIS BIG S STOP LIST AND THEY JUST IGNORE ALL OF THOSE THROUGHOUT THE EXPERIMENT YEAH I IT WOULD BE USEFUL FOR ME AS WELL IT UH I THINK THAT'D BE USEFUL FOR ME AS WELL YEAH YEAH
<|0.00|> but one of these big corpses has a list of stop words that you can download<|5.30|><|5.30|> and they're just basically lists of really uninteresting, boring words<|9.00|><|9.00|> that we could filter out before we do that.<|11.90|><|11.90|> It's like that's one of the papers I read,<|13.90|><|13.90|> that's one of the things they did right at the beginning<|16.20|><|16.20|> is they've got this big stop list and they just ignore all of those throughout the experiment.<|22.40|><|22.40|> Yeah, it'd be useful for me as well.<|25.10|><|25.10|> I think that'd be useful for me as well.<|27.60|><|27.60|> Yeah.<|29.30|>
but one of these big corpuses has a list of stop words that you can download and they are just basically lists of really uninteresting boring words that we could filter out before we do that it is like that is one the papers i read that is one things they did right at the beginning is they have got this big s stop list and they just ignore all of those throughout the experiment yeah i it would be useful for me as well it i think that would be useful for me as well yeah yeah
but one of these big corpses has a list of stop words that you can download and they are just basically lists of really uninteresting boring words that we could filter out before we do that it is like that is one of the papers i read that is one of the things they did right at the beginning is they have got this big stop list and they just ignore all of those throughout the experiment yeah it would be useful for me as well i think that would be useful for me as well yeah
8.163265
WHY DOES IT NEED TO BE CLASSIFIED INTO LIKE DIFFERENT SEGMENTS CAN WE JUST FILL A SECOND CLASS WITH JUNK THAT WE DON'T CARE ABOUT LIKE I DON'T KNOW COPIES OF SHAKESPEARE OR SOMETHING 'CAUSE IF WHAT WE'RE LOOKING FOR IS THE UM FREQUENCY STATISTICS I DON'T SEE HOW THAT WOULD BE CHANGED BY THE CLASSIFICATION I THE WELL THERE MAYBE ANOTHER TOOL AVAILABLE YEAH UM I CAN'T REMEMBER WHO'S GOT IT MIGHT BE WORDNET
<|0.00|> Why does it need to be classified into different segments?<|3.28|><|3.28|> Can we just fill a second class with junk that we don't care about?<|6.84|><|6.84|> Like, I don't know, copies of Shakespeare or something?<|9.36|><|9.36|> Because if all we're looking for is the frequency statistics,<|12.84|><|12.84|> I don't see how that would be changed by the classification.<|15.24|><|15.24|> I wondered that.<|15.96|><|15.96|> Well, there may be another tool available.<|17.80|><|17.80|> Yeah.<|18.40|><|19.64|> I can't remember who's got it.<|20.80|><|20.80|> It might be WordNet.<|22.00|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:61692286:932204
29.129999
IT'S YEAH I MEAN THE WAVE DATA ARE OBVIOUSLY NOT GONNA GET OFF THERE COMPLETELY REALLY OH RIGHT I'LL SEE IF I CAN S. C. P. IT I SUPPOSE I'VE GOT A LINUX BOX AND A WINDOWS BOX SO BROAD BAND PUT IT ON TO C. D. I CAN IF I GET DOWN I CAN PUT TO C. D. YEAH I'M NOT SURE IF THERE'S ENOUGH SPACE IS HOW MUCH DO WE GET REALLY OKAY YEAH BUT I CAN DO IT FROM THAT SESSION CAN'T I YOU CAN COMPRESS IT FROM A REMOTE SESSION AND S. C. P. IT FROM THE SAME SESSION DO YOU THINK
<|0.00|> Because yeah, I mean, the WAV data are obviously not going to get off there completely.<|5.24|><|5.24|> Really?<|6.24|><|6.24|> Oh right.<|7.24|><|7.24|> I'll see if I can SCP it I suppose.<|8.24|><|8.24|> I've got a Linux box and a Windows box so broadband.<|11.24|><|11.24|> Put it onto CD.<|12.24|><|12.24|> I can, if I get it down I can put it to CD.<|16.24|><|16.24|> Yeah.<|17.24|><|17.24|> I'm not sure if there's enough space because how much do we get really?<|20.60|><|20.60|> Yeah, okay.<|21.60|><|21.60|> Yeah, but I can do it from that session, can't I?<|24.06|><|24.06|> I can compress it from a remote session and SCP it from the same session, do you think?<|29.14|>
it is yeah i mean the wave data are obviously not going to get off there completely really 0 right i will see if i can s c p it i suppose i have got a linux box and a windows box so broad band put it on to c d i can if i get down i can put to c d yeah i am not sure if there is enough space is how much do we get really okay yeah but i can do it from that session can not i you can compress it from a remote session and s c p it from the same session do you think
because yeah i mean the wav data are obviously not going to get off there completely really 0 right i will see if i can scp it i suppose i have got a linux box and a windows box so broadband put it onto cd i can if i get it down i can put it to cd yeah i am not sure if there is enough space because how much do we get really yeah okay yeah but i can do it from that session can not i i can compress it from a remote session and scp it from the same session do you think
19.469027
WELL ALL YOU REALLY WANNA DO IS LOOK INTO GETTING SOME SUB SET OF THE ICSI CORPUS OFF THE DICE MACHINES 'CAUSE I HATE WORKING ON DICE IT'S AWFUL LIKE SO I CAN USE MY HOME MACHINE HA HAS A C. D. BURNER THOUGH HAS A C. D. BURNER YEAH THE RIGHT HAND CORNER FAR RIGHT YEAH HOW BIG IS IT WITHOUT UM THE WAV FILES AND STUFF 'CAUSE I COULD JUST SAY AT UM GOING OVER S. C. P. ONE NIGHT AND JUST LEAVE IT GOING ALL NIGHT IF I HAD TO
<|0.00|> What I really want to do is look into getting some subset of the Xe Corpus off the DICE machines.<|5.60|><|6.20|> Because I hate working on DICE.<|7.54|><|7.64|> It's awful.<|8.18|><|8.38|> Like, so I can use my home machine.<|9.96|><|10.18|> Where has a CD burner, though?<|12.28|><|12.72|> Where has a CD burner?<|13.84|><|14.08|> Yeah.<|14.34|><|15.24|> The right-hand corner, far right.<|16.90|><|17.16|> How big is it without the WAV files and stuff?<|21.46|><|21.46|> Because I could just set it going over SCP one night and just leave it going all night if I had to.<|27.10|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:78127488:744364
23.26
YEAH OH NO NO I WAS THINKING OF SSHING JUST INTO SOME MACHINE AND THEN JUST SCPING IT FROM THERE YEAH I MEAN IT HAS TO GO THROUGH THE GATEWAY BUT CAN YOU NOT DO THAT MM I SEE YEAH SO YOU COULD JUST BUT TH FIRST UH HOW BIG ARE THE CHUNKS HOW BIG ARE THE CHUNKS YOU'RE LOOKING AT SO QUITE SMALL THEN SO YOU COULD JUST UM YOU COULD USE JUST THE SAME THING WE USED TO BUILD THE BIG DICTIONARY
<|0.00|> Yeah. I don't know, I was thinking of SSH-ing just into some machine and then just SCP-ing<|6.32|><|6.32|> it from there. I mean it has to go through the gateway but can you not do that?<|9.90|><|9.90|> Mmm, I see.<|10.90|><|10.90|> So you could just...<|11.90|><|11.90|> But first, how big are the chunks?<|13.90|><|13.90|> How big are the chunks you're looking at?<|15.90|><|15.90|> So quite small then.<|17.90|><|17.90|> So you could just, you could use just the same thing we used to build the big dictionary<|23.26|>
yeah 0 no no i was thinking of sshing just into some machine and then just scping it from there yeah i mean it has to go through the gateway but can you not do that i see yeah so you could just but th 1st how big are the chunks how big are the chunks you are looking at so quite small then so you could just you could use just the same thing we used to build the big dictionary
yeah i do not know i was thinking of ssh ing just into some machine and then just scp ing it from there i mean it has to go through the gateway but can you not do that i see so you could just but 1st how big are the chunks how big are the chunks you are looking at so quite small then so you could just you could use just the same thing we used to build the big dictionary
13.414634
IT'S YEAH I MEAN THE WAVE DATA ARE OBVIOUSLY NOT GONNA GET OFF THERE COMPLETELY REALLY OH RIGHT I'LL SEE IF I CAN S. C. P. IT I SUPPOSE I'VE GOT A LINUX BOX AND A WINDOWS BOX SO BROAD BAND PUT IT ON TO C. D. I CAN IF I GET DOWN I CAN PUT TO C. D. YEAH I'M NOT SURE IF THERE'S ENOUGH SPACE IS HOW MUCH DO WE GET REALLY OKAY YEAH BUT I CAN DO IT FROM THAT SESSION CAN'T I YOU CAN COMPRESS IT FROM A REMOTE SESSION AND S. C. P. IT FROM THE SAME SESSION DO YOU THINK
<|0.00|> Because yeah, I mean, the WAV data are obviously not going to get off there completely.<|5.24|><|5.24|> Really?<|6.24|><|6.24|> Oh right.<|7.24|><|7.24|> I'll see if I can SCP it I suppose.<|8.24|><|8.24|> I've got a Linux box and a Windows box so broadband.<|11.24|><|11.24|> Put it onto CD.<|12.24|><|12.24|> I can, if I get it down I can put it to CD.<|16.24|><|16.24|> Yeah.<|17.24|><|17.24|> I'm not sure if there's enough space because how much do we get really?<|20.60|><|20.60|> Yeah, okay.<|21.60|><|21.60|> Yeah, but I can do it from that session, can't I?<|24.06|><|24.06|> I can compress it from a remote session and SCP it from the same session, do you think?<|29.14|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:49288076:815084
25.469999
BUT UM DEPENDING ON THE CONTEXT THE SIZE AND WHAT WE CONSIDER A DOCUMENT IN THE SENSE OF CALCULATING T. F. I. D. F. IS GONNA CHANGE WHICH MIGHT NEED THINKING ABOUT I THINK IT WOULD BE USEFUL YEAH WELL YOU NEED THE RAW FREQUENCY AS WELL BUT UM YOU ALSO NEED HOW MANY TIMES THINGS OCCUR WITHIN EACH DOCUMENT
<|0.00|> But depending on the context, the size, and what we consider a document in the sense of<|7.38|><|7.38|> calculating tf, idf is going to change, which might need thinking about.<|11.96|><|11.96|> I think it would be useful, yeah.<|14.72|><|14.72|> Well, you need the raw frequency as well, but you also need how many times things occur<|23.24|><|23.24|> within each document.<|25.48|>
but depending on the context the size and what we consider a document in the sense of calculating t f i d f is going to change which might need thinking about i think it would be useful yeah well you need the raw frequency as well but you also need how many times things occur within each document
but depending on the context the size and what we consider a document in the sense of calculating tf idf is going to change which might need thinking about i think it would be useful yeah well you need the raw frequency as well but you also need how many times things occur within each document
8.474576
YOU JUST DO THAT ON LINE 'CAUSE THAT WON'T TAKE LONG TO BUILD A LITTLE DICTIONARY THAT BIG WILL IT I MEAN JUST USE THE SAME TOOL THAT WE USE YEAH YEAH IT DOESN'T NEED ORDERED NO UM WELL THAT'S THE T ARE YOU USING T. F. I. D. F. FOR THE INFORMATION DENSITY ALRIGHT OKAY LIKE 'CAUSE FREQUENCY WOULD BE USEFUL I THINK
<|0.00|> just do that online so that won't take long to build a little dictionary that big will it<|6.00|><|7.20|> i mean just use the same tool that we'll use yeah yeah it doesn't need ordered no um well that's it<|14.56|><|14.56|> are you using tf idf for the information density all right like because frequency would be useful i<|23.60|><|23.60|> think yeah<|25.20|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:103376988:871724
27.24
AND UM WHAT WE CONSIDER A DOCUMENT'S GONNA DEPEND ON OUR CONTEXT I THINK 'CAUSE IF WE'RE LOOKING AT THE WHOLE LOT OF MEETINGS WE'LL CONSIDER EACH MEETING A DOCUMENT IN SORT OF TERMS OF THIS ALGORITHM AND IF WE'RE VIEWING LIKE SAY JUST A SMALL TOPIC SEGMENT YOU MIGHT LOOK AT EVEN EACH UTTERANCE AS A SMALL DOCUMENT YEAH BUT THE THING IS UM IT'S GONNA NEED SOME TH TH THOUGHT OF HOW WE ACTUALLY MAYBE IT DOESN'T ACTUALLY MATTER
<|0.00|> And what we consider a document is going to depend on our context, I think.<|4.48|><|5.40|> So if we're looking at a whole lot of meetings, we'll consider each meeting<|8.52|><|8.52|> a document in sort of terms of this algorithm.<|12.04|><|12.04|> And if we're viewing, like, say, just a small topic segment,<|15.72|><|16.68|> you might look at even each utterance<|18.84|><|20.16|> as a small document.<|21.28|><|21.28|> Yeah, the thing is, there's an reason<|23.48|><|24.28|> thought of how actually maybe it doesn't actually matter.<|27.24|>
and what we consider a document is going to depend on our context i think cause if we are looking at the whole lot of meetings we will consider each meeting a document in sort of terms of this algorithm and if we are viewing like say just a small topic segment you might look at even each utterance as a small document yeah but the thing is it is going to need some th th thought of how we actually maybe it does not actually matter
and what we consider a document is going to depend on our context i think so if we are looking at a whole lot of meetings we will consider each meeting a document in sort of terms of this algorithm and if we are viewing like say just a small topic segment you might look at even each utterance as a small document yeah the thing is there is an reason thought of how actually maybe it does not actually matter
12.643678
BUT UM DEPENDING ON THE CONTEXT THE SIZE AND WHAT WE CONSIDER A DOCUMENT IN THE SENSE OF CALCULATING T. F. I. D. F. IS GONNA CHANGE WHICH MIGHT NEED THINKING ABOUT I THINK IT WOULD BE USEFUL YEAH WELL YOU NEED THE RAW FREQUENCY AS WELL BUT UM YOU ALSO NEED HOW MANY TIMES THINGS OCCUR WITHIN EACH DOCUMENT
<|0.00|> But depending on the context, the size, and what we consider a document in the sense of<|7.38|><|7.38|> calculating tf, idf is going to change, which might need thinking about.<|11.96|><|11.96|> I think it would be useful, yeah.<|14.72|><|14.72|> Well, you need the raw frequency as well, but you also need how many times things occur<|23.24|><|23.24|> within each document.<|25.48|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:84091514:822124
25.690001
MAYBE IF YOU JUST DO IT ONCE AT THE HIGHEST LEVEL IT IT WILL BE FINE BUT I WAS JUST THINKING IT MIGHT BE DIFFICULT TO CALCULATE THE T. F. I. D. F. OFF LINE FOR ALL THE DIFFERENT LEVELS WE MIGHT WANT 'CAUSE IF WE'RE GONNA ALLOW DISJOINT SEGMENTS FOR EXAMPLE THEN HOW ARE WE GONNA KNOW WHAT'S GONNA BE IN CONTEXT AT ANY GIVEN TIME BUT I SUPPOSE IF YOU JUST DID IT GLOBALLY TREATING A MEETING AS A DOCUMENT
<|0.00|> maybe if you just do it once at the highest level it will be fine but I was<|6.04|><|6.04|> just thinking it might be difficult to calculate the TF IDF offline for all the<|10.24|><|10.24|> different levels we might want because if we're going to allow disjoint segments<|15.06|><|15.06|> for example then how are we going to know what's going to be in context at any<|19.74|><|19.74|> given time I suppose if you just did it globally treating a meeting as a document<|25.70|>
maybe if you just do it once at the highest level it it will be fine but i was just thinking it might be difficult to calculate the t f i d f off line for all the different levels we might want cause if we are going to allow disjoint segments for example then how are we going to know what is going to be in context at any given time but i suppose if you just did it globally treating a meeting as a document
maybe if you just do it once at the highest level it will be fine but i was just thinking it might be difficult to calculate the tf idf offline for all the different levels we might want because if we are going to allow disjoint segments for example then how are we going to know what is going to be in context at any given time i suppose if you just did it globally treating a meeting as a document
11.494253
AND UM WHAT WE CONSIDER A DOCUMENT'S GONNA DEPEND ON OUR CONTEXT I THINK 'CAUSE IF WE'RE LOOKING AT THE WHOLE LOT OF MEETINGS WE'LL CONSIDER EACH MEETING A DOCUMENT IN SORT OF TERMS OF THIS ALGORITHM AND IF WE'RE VIEWING LIKE SAY JUST A SMALL TOPIC SEGMENT YOU MIGHT LOOK AT EVEN EACH UTTERANCE AS A SMALL DOCUMENT YEAH BUT THE THING IS UM IT'S GONNA NEED SOME TH TH THOUGHT OF HOW WE ACTUALLY MAYBE IT DOESN'T ACTUALLY MATTER
<|0.00|> And what we consider a document is going to depend on our context, I think.<|4.48|><|5.40|> So if we're looking at a whole lot of meetings, we'll consider each meeting<|8.52|><|8.52|> a document in sort of terms of this algorithm.<|12.04|><|12.04|> And if we're viewing, like, say, just a small topic segment,<|15.72|><|16.68|> you might look at even each utterance<|18.84|><|20.16|> as a small document.<|21.28|><|21.28|> Yeah, the thing is, there's an reason<|23.48|><|24.28|> thought of how actually maybe it doesn't actually matter.<|27.24|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:92928214:953644
29.799999
IT'D PROBABLY STILL BE WORK OUT FINE BECAUSE YOU'D ONLY BE COMPARING TO ONES WITHIN THE CONTEXT UH I DON'T KNOW I THOUGHT WERE YOU GONNA USE THAT IN THE END THE INFORMATION DENSITY OH SORRY THAT'S WHAT I MEAN LIKE UM YEAH FOR EACH WORD OR WHATEVER BUT ACROSS THE WHOLE LOT IS WHAT I MEAN BY HIGHEST LEVEL LIKE ACROSS THE WHOLE CORPUS YEAH BUT YOU'D PROBABLY LOOK AT EACH MEETING AS A DOCUMENT MM POSSIBLY ARE THEY BIG ENOUGH TO GET ANYTHING MEANINGFUL OUT OF WELL YEAH THAT IS NOT IT'S NOT AN ISSUE YOU JUST CONCATENATE AN X. M. L. FILE TOGETHER
<|0.00|> it'd probably still be work out fine<|2.84|><|2.84|> because you'd only be comparing to ones within the context.<|6.22|><|6.40|> I don't know, I thought, were you going to use that in the end?<|9.08|><|9.16|> The information density.<|10.18|><|10.48|> Oh, sorry, that's what I mean.<|11.32|><|12.60|> Yeah, for each word or whatever, but across the whole lot<|16.12|><|16.12|> is what I mean by highest level, across the whole corpus.<|19.52|><|19.66|> Yeah, but you'd probably look at each meeting as a document<|22.08|><|22.08|> and possibly, are they big enough to get anything meaningful out of?<|25.70|><|25.76|> Well, yeah, it's not an issue.<|27.30|><|27.30|> You just concatenate an XML file together.<|29.80|>
it would probably still be work out fine because you would only be comparing to ones within the context i do not know i thought were you going to use that in the end the information density 0 sorry that is what i mean like yeah for each word or whatever but across the whole lot is what i mean by highest level like across the whole corpus yeah but you would probably look at each meeting as a document possibly are they big enough to get anything meaningful out of well yeah that is not it is not an issue you just concatenate an x m l file together
it would probably still be work out fine because you would only be comparing to ones within the context i do not know i thought were you going to use that in the end the information density 0 sorry that is what i mean yeah for each word or whatever but across the whole lot is what i mean by highest level across the whole corpus yeah but you would probably look at each meeting as a document and possibly are they big enough to get anything meaningful out of well yeah it is not an issue you just concatenate an xml file together
8.181818
MAYBE IF YOU JUST DO IT ONCE AT THE HIGHEST LEVEL IT IT WILL BE FINE BUT I WAS JUST THINKING IT MIGHT BE DIFFICULT TO CALCULATE THE T. F. I. D. F. OFF LINE FOR ALL THE DIFFERENT LEVELS WE MIGHT WANT 'CAUSE IF WE'RE GONNA ALLOW DISJOINT SEGMENTS FOR EXAMPLE THEN HOW ARE WE GONNA KNOW WHAT'S GONNA BE IN CONTEXT AT ANY GIVEN TIME BUT I SUPPOSE IF YOU JUST DID IT GLOBALLY TREATING A MEETING AS A DOCUMENT
<|0.00|> maybe if you just do it once at the highest level it will be fine but I was<|6.04|><|6.04|> just thinking it might be difficult to calculate the TF IDF offline for all the<|10.24|><|10.24|> different levels we might want because if we're going to allow disjoint segments<|15.06|><|15.06|> for example then how are we going to know what's going to be in context at any<|19.74|><|19.74|> given time I suppose if you just did it globally treating a meeting as a document<|25.70|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:109415574:863724
26.99
BUT WE STILL WANT TO HAVE LIKE A NOTION OF MEETINGS FOR THE USER YEAH SURE YEAH YOU JUST LIKE WHATEVER YOU WANT TO LOOK AT YOU JUST JAM TOGETHER INTO AN X. M. L. FILE AND THAT'S YOUR MEETING EVEN THOUGH BITS OF IT MAY COME FROM ALL OVER THE PLACE OR WHATEVER I MEAN I DON'T SEE WHY THAT'S REALLY A BIG PROBLEM SO BASICALLY WHAT YOU'RE SAYING IS YOU CAN TAKE AN ARBITRARY AMOUNT OF DATA AND PROCESS IT WITH THE SAME ALGORITHM
<|0.00|> but we still want to have like a notion of meetings for the user.<|4.38|><|4.38|> Yeah, sure.<|5.46|><|5.46|> Yeah, you just like whatever you want to look at,<|8.64|><|8.64|> you just jam together into an XML file and that's your meeting,<|12.84|><|12.84|> even though bits of it may have come from all over the place or whatever.<|16.68|><|16.68|> I mean, I don't see why that's really a big problem.<|21.36|><|21.36|> So basically what you're saying is you can take an arbitrary amount of data<|24.78|><|24.78|> and process it with the same algorithm<|27.00|>
but we still want to have like a notion of meetings for the user yeah sure yeah you just like whatever you want to look at you just jam together into an x m l file and that is your meeting even though bits of it may come from all over the place or whatever i mean i do not see why that is really a big problem so basically what you are saying is you can take an arbitrary amount of data and process it with the same algorithm
but we still want to have like a notion of meetings for the user yeah sure yeah you just like whatever you want to look at you just jam together into an xml file and that is your meeting even though bits of it may have come from all over the place or whatever i mean i do not see why that is really a big problem so basically what you are saying is you can take an arbitrary amount of data and process it with the same algorithm
4.444445
IT'D PROBABLY STILL BE WORK OUT FINE BECAUSE YOU'D ONLY BE COMPARING TO ONES WITHIN THE CONTEXT UH I DON'T KNOW I THOUGHT WERE YOU GONNA USE THAT IN THE END THE INFORMATION DENSITY OH SORRY THAT'S WHAT I MEAN LIKE UM YEAH FOR EACH WORD OR WHATEVER BUT ACROSS THE WHOLE LOT IS WHAT I MEAN BY HIGHEST LEVEL LIKE ACROSS THE WHOLE CORPUS YEAH BUT YOU'D PROBABLY LOOK AT EACH MEETING AS A DOCUMENT MM POSSIBLY ARE THEY BIG ENOUGH TO GET ANYTHING MEANINGFUL OUT OF WELL YEAH THAT IS NOT IT'S NOT AN ISSUE YOU JUST CONCATENATE AN X. M. L. FILE TOGETHER
<|0.00|> it'd probably still be work out fine<|2.84|><|2.84|> because you'd only be comparing to ones within the context.<|6.22|><|6.40|> I don't know, I thought, were you going to use that in the end?<|9.08|><|9.16|> The information density.<|10.18|><|10.48|> Oh, sorry, that's what I mean.<|11.32|><|12.60|> Yeah, for each word or whatever, but across the whole lot<|16.12|><|16.12|> is what I mean by highest level, across the whole corpus.<|19.52|><|19.66|> Yeah, but you'd probably look at each meeting as a document<|22.08|><|22.08|> and possibly, are they big enough to get anything meaningful out of?<|25.70|><|25.76|> Well, yeah, it's not an issue.<|27.30|><|27.30|> You just concatenate an XML file together.<|29.80|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:59224412:685804
21.43
IT DOESN'T MATTER CONCEPTUALLY WHAT THAT DATA IS IT COULD BE A MEETING IT COULD BE TWO UTTERANCES IT COULD BE A MEETING PLUS HALF A MEETING FROM SOMEWHERE ELSE I DON'T THINK IT'S VERY DIFFICULT THOUGH I MEAN WHAT YOU DO IS YOU JUST BUILD AN X. M. L. FILE AND IF YOU WANT IT TO GET DOWN TO THE UTTERANCES YOU'D GO TO THE LEAVES AND THEN IF YOU WANTED THE NEXT LEVEL UP
<|0.00|> It doesn't matter conceptually what that data is.<|3.76|><|3.76|> It could be a meeting.<|4.80|><|4.80|> It could be two utterances.<|6.76|><|6.76|> It could be a meeting plus half a meeting from somewhere else.<|9.64|><|9.64|> I think it's very difficult, though.<|11.04|><|11.04|> I mean, what you do is you just build an XML file.<|14.20|><|14.20|> And if you wanted to get down to the utterances,<|16.62|><|16.62|> you'd go to the leaves.<|18.36|><|18.36|> And then if you wanted the next level up,<|21.42|>
it does not matter conceptually what that data is it could be a meeting it could be 2 utterances it could be a meeting plus half a meeting from somewhere else i do not think it is very difficult though i mean what you do is you just build an x m l file and if you want it to get down to the utterances you would go to the leaves and then if you wanted the next level up
it does not matter conceptually what that data is it could be a meeting it could be 2 utterances it could be a meeting plus half a meeting from somewhere else i think it is very difficult though i mean what you do is you just build an xml file and if you wanted to get down to the utterances you would go to the leaves and then if you wanted the next level up
8.75
BUT WE STILL WANT TO HAVE LIKE A NOTION OF MEETINGS FOR THE USER YEAH SURE YEAH YOU JUST LIKE WHATEVER YOU WANT TO LOOK AT YOU JUST JAM TOGETHER INTO AN X. M. L. FILE AND THAT'S YOUR MEETING EVEN THOUGH BITS OF IT MAY COME FROM ALL OVER THE PLACE OR WHATEVER I MEAN I DON'T SEE WHY THAT'S REALLY A BIG PROBLEM SO BASICALLY WHAT YOU'RE SAYING IS YOU CAN TAKE AN ARBITRARY AMOUNT OF DATA AND PROCESS IT WITH THE SAME ALGORITHM
<|0.00|> but we still want to have like a notion of meetings for the user.<|4.38|><|4.38|> Yeah, sure.<|5.46|><|5.46|> Yeah, you just like whatever you want to look at,<|8.64|><|8.64|> you just jam together into an XML file and that's your meeting,<|12.84|><|12.84|> even though bits of it may have come from all over the place or whatever.<|16.68|><|16.68|> I mean, I don't see why that's really a big problem.<|21.36|><|21.36|> So basically what you're saying is you can take an arbitrary amount of data<|24.78|><|24.78|> and process it with the same algorithm<|27.00|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:119205352:693804
21.68
YOU'D GO TO THE PARENTS OF THOSE AND LIKE JUST GO FROM LIKE THE LEAVES INWARDS TOWARDS THE BRANCH TO BUILD UP THINGS LIKE UM YOU KNOW WHEN YOU CLICK ON A SEGMENT IT'S GONNA HAVE LIKE WORDS OR WHATEVER THAT ARE IMPORTANT AS LONG AS LIKE THE ALGORITHMS ARE DESIGNED UM WITH IT IN MIND I DON'T THINK IT'S A VERY BIG PROBLEM
<|0.00|> you'd go to the parents of those and just go from the leaves<|6.42|><|6.42|> inwards towards the branch to build up things like when you<|10.50|><|10.50|> click on a segment, it's going to have words or whatever<|14.52|><|14.52|> that are important.<|15.66|><|15.66|> As long as the algorithms are designed with it in mind,<|20.20|><|20.20|> I don't think it's a very big problem.<|21.68|>
you would go to the parents of those and like just go from like the leaves inwards towards the branch to build up things like you know when you click on a segment it is going to have like words or whatever that are important as long as like the algorithms are designed with it in mind i do not think it is a very big problem
you would go to the parents of those and just go from the leaves inwards towards the branch to build up things like when you click on a segment it is going to have words or whatever that are important as long as the algorithms are designed with it in mind i do not think it is a very big problem
8.955224
IT DOESN'T MATTER CONCEPTUALLY WHAT THAT DATA IS IT COULD BE A MEETING IT COULD BE TWO UTTERANCES IT COULD BE A MEETING PLUS HALF A MEETING FROM SOMEWHERE ELSE I DON'T THINK IT'S VERY DIFFICULT THOUGH I MEAN WHAT YOU DO IS YOU JUST BUILD AN X. M. L. FILE AND IF YOU WANT IT TO GET DOWN TO THE UTTERANCES YOU'D GO TO THE LEAVES AND THEN IF YOU WANTED THE NEXT LEVEL UP
<|0.00|> It doesn't matter conceptually what that data is.<|3.76|><|3.76|> It could be a meeting.<|4.80|><|4.80|> It could be two utterances.<|6.76|><|6.76|> It could be a meeting plus half a meeting from somewhere else.<|9.64|><|9.64|> I think it's very difficult, though.<|11.04|><|11.04|> I mean, what you do is you just build an XML file.<|14.20|><|14.20|> And if you wanted to get down to the utterances,<|16.62|><|16.62|> you'd go to the leaves.<|18.36|><|18.36|> And then if you wanted the next level up,<|21.42|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:51872430:900204
28.129999
WELL LIKE SAY YOU HAD UM LIKE SAY FOR A MEETING RIGHT YOU'VE GOT LIKE UH SAY A HIERARCHY THAT LOOKS QUITE BIG LIKE THIS AND LIKE THE UTTERANCES COME OFF OF HERE MAYBE THEN WHEN WHATEVER YOUR ALGORITHM IS DOING AS LONG AS WHEN YOU'RE WORKING WITH UTTERANCES YOU GO FOR ALL THE LEAVES LIKE THEN IF YOU NEED SOMETHING NEXT UP SO LIKE A TOPIC SEGMENT YOU'D GO TO HERE BUT IF YOU WERE LOOKING AT SAY THIS ONE SO ONLY WENT LIKE THIS
<|0.00|> Well, like, say you had, like, say for a meeting, right, you've got, like, say a hierarchy<|8.02|><|8.02|> that looks quite big, like this, and like the utterances come off of here, maybe.<|12.84|><|12.84|> When whatever your algorithm is doing, as long as when you're working with utterances,<|16.66|><|16.66|> you go for all the leaves, like, then if you need something next up, so like a topic segment,<|23.02|><|23.02|> you'd go to here, but if you were looking at, say, this one, so it only went like this,<|28.14|>
well like say you had like say for a meeting right you have got like say a hierarchy that looks quite big like this and like the utterances come off of here maybe then when whatever your algorithm is doing as long as when you are working with utterances you go for all the leaves like then if you need something next up so like a topic segment you would go to here but if you were looking at say this one so only went like this
well like say you had like say for a meeting right you have got like say a hierarchy that looks quite big like this and like the utterances come off of here maybe when whatever your algorithm is doing as long as when you are working with utterances you go for all the leaves like then if you need something next up so like a topic segment you would go to here but if you were looking at say this one so it only went like this
2.298851
YOU'D GO TO THE PARENTS OF THOSE AND LIKE JUST GO FROM LIKE THE LEAVES INWARDS TOWARDS THE BRANCH TO BUILD UP THINGS LIKE UM YOU KNOW WHEN YOU CLICK ON A SEGMENT IT'S GONNA HAVE LIKE WORDS OR WHATEVER THAT ARE IMPORTANT AS LONG AS LIKE THE ALGORITHMS ARE DESIGNED UM WITH IT IN MIND I DON'T THINK IT'S A VERY BIG PROBLEM
<|0.00|> you'd go to the parents of those and just go from the leaves<|6.42|><|6.42|> inwards towards the branch to build up things like when you<|10.50|><|10.50|> click on a segment, it's going to have words or whatever<|14.52|><|14.52|> that are important.<|15.66|><|15.66|> As long as the algorithms are designed with it in mind,<|20.20|><|20.20|> I don't think it's a very big problem.<|21.68|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:59910290:826924
25.84
RIGHT SO YOU IT'S SAME YOU'D START WITH THE LEAVES AND YOU GO OH I WANT A TOPIC SEGMENT SO I GO ONE LAYER UP SEE AND THEN IF YOU'RE WORKING WITH JUST A TOPIC SEGMENT IN THERE IT'S THE ONLY THING YOU HAVE TO WORRY ABOUT AND LIKE EACH TIME YOU WANT A HIGHER LEVEL YOU JUST NEED TO GO UP THE TREE AND AS LONG AS YOUR ALGORITHM RESPECTS THAT THEN WE CAN JUST PROCESS ANY ARBITRARY X. M. L. FILE WITH WHATEVER HIERARCHICAL STRUCTURE WE WANT A MEETING SAY AND THAT WOULD BE A TOPIC SEGMENT
<|0.00|> Right, so it's the same.<|1.20|><|1.38|> You'd start with the leaves and you'd go,<|3.00|><|3.12|> oh, I want a topic segment, so I go one layer up, see?<|5.52|><|5.70|> And then if you're working with just a topic segment in there,<|9.14|><|9.34|> it's the only thing you have to worry about.<|11.08|><|11.52|> And, like, each time you want a higher level,<|13.96|><|14.04|> you just need to go up the tree.<|15.22|><|15.68|> As long as your algorithm respects that,<|17.48|><|18.16|> then we can just process any arbitrary XML file<|21.24|><|21.24|> with whatever hierarchical structure we want,<|23.30|><|23.30|> a meeting, say, and that would be a topic segment.<|25.84|>
right so you it is same you would start with the leaves and you go 0 i want a topic segment so i go one layer up see and then if you are working with just a topic segment in there it is the only thing you have to worry about and like each time you want a higher level you just need to go up the tree and as long as your algorithm respects that then we can just process any arbitrary x m l file with whatever hierarchical structure we want a meeting say and that would be a topic segment
right so it is the same you would start with the leaves and you would go 0 i want a topic segment so i go one layer up see and then if you are working with just a topic segment in there it is the only thing you have to worry about and like each time you want a higher level you just need to go up the tree as long as your algorithm respects that then we can just process any arbitrary xml file with whatever hierarchical structure we want a meeting say and that would be a topic segment
6.796116
WELL LIKE SAY YOU HAD UM LIKE SAY FOR A MEETING RIGHT YOU'VE GOT LIKE UH SAY A HIERARCHY THAT LOOKS QUITE BIG LIKE THIS AND LIKE THE UTTERANCES COME OFF OF HERE MAYBE THEN WHEN WHATEVER YOUR ALGORITHM IS DOING AS LONG AS WHEN YOU'RE WORKING WITH UTTERANCES YOU GO FOR ALL THE LEAVES LIKE THEN IF YOU NEED SOMETHING NEXT UP SO LIKE A TOPIC SEGMENT YOU'D GO TO HERE BUT IF YOU WERE LOOKING AT SAY THIS ONE SO ONLY WENT LIKE THIS
<|0.00|> Well, like, say you had, like, say for a meeting, right, you've got, like, say a hierarchy<|8.02|><|8.02|> that looks quite big, like this, and like the utterances come off of here, maybe.<|12.84|><|12.84|> When whatever your algorithm is doing, as long as when you're working with utterances,<|16.66|><|16.66|> you go for all the leaves, like, then if you need something next up, so like a topic segment,<|23.02|><|23.02|> you'd go to here, but if you were looking at, say, this one, so it only went like this,<|28.14|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:31480600:880042
27.499937
SO I THINK AS LONG AS YOU BUILD AN ALGORITHM THAT RESPECTS WHATEVER STRUCTURE'S IN THE FILE RATHER THAN IMPOSING ITS OWN STRUCTURE WELL NO IT DOESN'T HAVE TO BE BUT I MEAN IT COULD BE AS MANY NODES AS YOU WANT LIKE THIS ONE COULD BE DEEPER MAYBE SAY SO THEN YOU'D START WITH ALL YOUR UTTERANCES HERE AND WHEN YOU GO UP TO GET TOPIC SEGMENTS YOU GO TO HERE HERE HERE HERE HERE HERE HERE THAT MIGHT BE A BIT CONFUSING THOUGH 'CAUSE YOU HAVE THINGS ON DIFFERENT LEVELS WELL WEDNESDAY YEAH YEAH
<|0.00|> So I think as long as you build an algorithm that respects whatever structure's in the file,<|5.22|><|5.86|> rather than imposing its own structure.<|7.92|><|8.06|> Well, no, it doesn't have to be.<|9.16|><|9.40|> But, I mean, it could be as many nodes as you want.<|11.22|><|11.82|> Like this one could be deeper, maybe, say.<|13.62|><|13.92|> So then you'd start with all your utterances here.<|16.22|><|16.70|> And when you go up to get topic segments, you go to here, here, here, here, here, here, here.<|21.62|><|21.70|> That might be a bit confusing, though, because you have things on different levels.<|24.58|><|24.82|> Well, Wednesday.<|25.70|><|26.70|> Yeah, yeah.<|27.50|>
so i think as long as you build an algorithm that respects whatever structure is in the file rather than imposing its own structure well no it does not have to be but i mean it could be as many nodes as you want like this one could be deeper maybe say so then you would start with all your utterances here and when you go up to get topic segments you go to here here here here here here here that might be a bit confusing though cause you have things on different levels well wednesday yeah yeah
so i think as long as you build an algorithm that respects whatever structure is in the file rather than imposing its own structure well no it does not have to be but i mean it could be as many nodes as you want like this one could be deeper maybe say so then you would start with all your utterances here and when you go up to get topic segments you go to here here here here here here here that might be a bit confusing though because you have things on different levels well wednesday yeah yeah
1.010101
RIGHT SO YOU IT'S SAME YOU'D START WITH THE LEAVES AND YOU GO OH I WANT A TOPIC SEGMENT SO I GO ONE LAYER UP SEE AND THEN IF YOU'RE WORKING WITH JUST A TOPIC SEGMENT IN THERE IT'S THE ONLY THING YOU HAVE TO WORRY ABOUT AND LIKE EACH TIME YOU WANT A HIGHER LEVEL YOU JUST NEED TO GO UP THE TREE AND AS LONG AS YOUR ALGORITHM RESPECTS THAT THEN WE CAN JUST PROCESS ANY ARBITRARY X. M. L. FILE WITH WHATEVER HIERARCHICAL STRUCTURE WE WANT A MEETING SAY AND THAT WOULD BE A TOPIC SEGMENT
<|0.00|> Right, so it's the same.<|1.20|><|1.38|> You'd start with the leaves and you'd go,<|3.00|><|3.12|> oh, I want a topic segment, so I go one layer up, see?<|5.52|><|5.70|> And then if you're working with just a topic segment in there,<|9.14|><|9.34|> it's the only thing you have to worry about.<|11.08|><|11.52|> And, like, each time you want a higher level,<|13.96|><|14.04|> you just need to go up the tree.<|15.22|><|15.68|> As long as your algorithm respects that,<|17.48|><|18.16|> then we can just process any arbitrary XML file<|21.24|><|21.24|> with whatever hierarchical structure we want,<|23.30|><|23.30|> a meeting, say, and that would be a topic segment.<|25.84|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:55547142:945002
29.529938
SO WE'LL SEE IF WE CAN GET LIKE A MINI BROWSER JUST DISPLAYS TWO THINGS SYNCHED TOGETHER OF SOME KIND YEAH YEAH IT'D BE USEFUL I DON'T KNOW WHO YOU SEE ABOUT THAT THOUGH I D HAVE NO IDEA I'VE PROBABLY GOT A REASONABLE AMOUNT BECAUSE UM EVERYTHING ON MY DICE ACCOUNT CAN ACTUALLY BE DELETED 'CAUSE I STORE IT ALL AT HOME AS WELL IS THAT GUARANTEED TO STAY THE MAYBE YOU SHOULD SEND A SUPPORT FORM JUST SAY WE WANT SOME WEB SPACE LISTEN TO YEAH 'CAUSE THAT'D BE REALLY USEFUL IS IF WE HAD A BIG DIRECTORY
<|0.00|> So we'll see if we can get like a mini browser that just displays two things synced together<|5.68|><|5.68|> of some kind.<|6.68|><|6.68|> Yeah.<|7.68|><|7.68|> It'd be useful.<|8.68|><|8.68|> I don't know who you see about that.<|10.68|><|10.68|> I have no idea.<|11.68|><|11.68|> I've probably got a reasonable amount because everything on my Dice account can actually<|16.58|><|16.58|> be deleted because I store it all at home as well.<|18.96|><|18.96|> Is that guaranteed to stay there?<|20.32|><|20.32|> Maybe we should send a support form to say we want some web space listened to.<|24.62|><|24.62|> Yeah.<|25.62|><|25.62|> Because that would be really useful is if we had a big directory.<|29.54|>
so we will see if we can get like a mini browser just displays 2 things synched together of some kind yeah yeah it would be useful i do not know who you see about that though i d have no idea i have probably got a reasonable amount because everything on my dice account can actually be deleted cause i store it all at home as well is that guaranteed to stay the maybe you should send a support form just say we want some web space listen to yeah cause that would be really useful is if we had a big directory
so we will see if we can get like a mini browser that just displays 2 things synced together of some kind yeah it would be useful i do not know who you see about that i have no idea i have probably got a reasonable amount because everything on my dice account can actually be deleted because i store it all at home as well is that guaranteed to stay there maybe we should send a support form to say we want some web space listened to yeah because that would be really useful is if we had a big directory
10.576923
SO I THINK AS LONG AS YOU BUILD AN ALGORITHM THAT RESPECTS WHATEVER STRUCTURE'S IN THE FILE RATHER THAN IMPOSING ITS OWN STRUCTURE WELL NO IT DOESN'T HAVE TO BE BUT I MEAN IT COULD BE AS MANY NODES AS YOU WANT LIKE THIS ONE COULD BE DEEPER MAYBE SAY SO THEN YOU'D START WITH ALL YOUR UTTERANCES HERE AND WHEN YOU GO UP TO GET TOPIC SEGMENTS YOU GO TO HERE HERE HERE HERE HERE HERE HERE THAT MIGHT BE A BIT CONFUSING THOUGH 'CAUSE YOU HAVE THINGS ON DIFFERENT LEVELS WELL WEDNESDAY YEAH YEAH
<|0.00|> So I think as long as you build an algorithm that respects whatever structure's in the file,<|5.22|><|5.86|> rather than imposing its own structure.<|7.92|><|8.06|> Well, no, it doesn't have to be.<|9.16|><|9.40|> But, I mean, it could be as many nodes as you want.<|11.22|><|11.82|> Like this one could be deeper, maybe, say.<|13.62|><|13.92|> So then you'd start with all your utterances here.<|16.22|><|16.70|> And when you go up to get topic segments, you go to here, here, here, here, here, here, here.<|21.62|><|21.70|> That might be a bit confusing, though, because you have things on different levels.<|24.58|><|24.82|> Well, Wednesday.<|25.70|><|26.70|> Yeah, yeah.<|27.50|>
/root/.cache/huggingface/hub/datasets--bofenghuang--stt-pseudo-labeled-whisper-large-v3-multilingual/snapshots/39445707e3a481be118d5e1935c94cdcadffba29//distil-whisper/ami-ihm/ihm/train_concatenated/EN2001a.zip:74615656:821164
25.66
ESPECIALLY FOR TRANSFERRING STUFF HAVING SAID THAT ARE WE ALLOWED TO TAKE A COPY OF THE ICSI CORPUS SOMETHING WE SHOULD PROBABLY ASK BEFORE WE DO IT OKAY OKAY NO ME NEITHER MIGHT BE FUNNY TO SEE WHAT IS SUMMARISED THE WHOLE CORPUS AS ANYWAY I THINK IT'D BE VERY USEFUL BUT WE CAN JUST CHANGE THE CODE IS THAT IT THAT'S QUITE GOOD YEAH
<|0.00|> especially for transferring stuff.<|2.16|><|2.16|> Having said that, are we allowed to take a copy of the XC corpus?<|5.04|><|5.92|> Something we should probably ask before we do it.<|7.76|><|7.76|> Okay.<|8.26|><|8.26|> Okay.<|8.72|><|8.72|> No, me neither.<|9.56|><|13.56|> Might be funny to see what is summarised the whole corpus has anyway.<|16.86|><|21.02|> I think it'd be very useful.<|22.78|><|22.78|> We can just change the code.<|23.90|><|23.90|> Is that it?<|24.60|><|24.60|> That's quite good.<|25.28|><|25.28|> Yeah.<|25.66|>
especially for transferring stuff having said that are we allowed to take a copy of the icsi corpus something we should probably ask before we do it okay okay no me neither might be funny to see what is summarized the whole corpus as anyway i think it would be very useful but we can just change the code is that it that is quite good yeah
especially for transferring stuff having said that are we allowed to take a copy of the xc corpus something we should probably ask before we do it okay okay no me neither might be funny to see what is summarized the whole corpus has anyway i think it would be very useful we can just change the code is that it that is quite good yeah
4.477612
SO WE'LL SEE IF WE CAN GET LIKE A MINI BROWSER JUST DISPLAYS TWO THINGS SYNCHED TOGETHER OF SOME KIND YEAH YEAH IT'D BE USEFUL I DON'T KNOW WHO YOU SEE ABOUT THAT THOUGH I D HAVE NO IDEA I'VE PROBABLY GOT A REASONABLE AMOUNT BECAUSE UM EVERYTHING ON MY DICE ACCOUNT CAN ACTUALLY BE DELETED 'CAUSE I STORE IT ALL AT HOME AS WELL IS THAT GUARANTEED TO STAY THE MAYBE YOU SHOULD SEND A SUPPORT FORM JUST SAY WE WANT SOME WEB SPACE LISTEN TO YEAH 'CAUSE THAT'D BE REALLY USEFUL IS IF WE HAD A BIG DIRECTORY
<|0.00|> So we'll see if we can get like a mini browser that just displays two things synced together<|5.68|><|5.68|> of some kind.<|6.68|><|6.68|> Yeah.<|7.68|><|7.68|> It'd be useful.<|8.68|><|8.68|> I don't know who you see about that.<|10.68|><|10.68|> I have no idea.<|11.68|><|11.68|> I've probably got a reasonable amount because everything on my Dice account can actually<|16.58|><|16.58|> be deleted because I store it all at home as well.<|18.96|><|18.96|> Is that guaranteed to stay there?<|20.32|><|20.32|> Maybe we should send a support form to say we want some web space listened to.<|24.62|><|24.62|> Yeah.<|25.62|><|25.62|> Because that would be really useful is if we had a big directory.<|29.54|>
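Each record above pairs a ground-truth AMI transcript with a Whisper large-v3 pseudo-label, their normalized lowercase forms, and a word error rate given as a percentage (e.g. 9.876543, 5.494505, 9.89011 in the rows above). The exact normalization and scoring tool used to produce those numbers is not documented on this page, so the following is a minimal illustrative sketch of how such a score can be computed from the two normalized strings; the `wer` helper below is a plain word-level Levenshtein implementation, not the dataset's own pipeline, and the example pair is a toy modeled on the rows above.

```python
# Illustrative WER computation between a normalized reference (text_norm)
# and a normalized hypothesis (whisper_transcript_norm), as a percentage.
# This is a plain word-level Levenshtein sketch, NOT the tool that produced
# the dataset's `wer` column, which is undocumented here.

def wer(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by reference length, as a percentage."""
    ref, hyp = reference.split(), hypothesis.split()
    prev = list(range(len(hyp) + 1))  # distances against an empty reference
    for i, r in enumerate(ref, start=1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr[j] = min(
                prev[j] + 1,        # deletion of a reference word
                curr[j - 1] + 1,    # insertion of a hypothesis word
                prev[j - 1] + cost, # match or substitution
            )
        prev = curr
    return 100.0 * prev[-1] / max(len(ref), 1)

# Toy pair modeled on the rows above: the hypothesis drops one filler word.
ref = "why does it need to be classified into like different segments"
hyp = "why does it need to be classified into different segments"
print(f"{wer(ref, hyp):.6f}")  # 1 deletion / 11 reference words -> 9.090909
```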
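For completeness, here is a sketch of how rows like the ones previewed above could be pulled locally with the Hugging Face `datasets` library. The repository id comes from the cached file paths shown in the preview; the configuration name and split ("distil-whisper/ami-ihm", "train") are assumptions inferred from those paths rather than documented values, so adjust them if loading fails.

```python
# Sketch of streaming a few rows of this dataset with the `datasets` library.
# The repo id is taken from the cache paths in the preview; the config and
# split names are ASSUMPTIONS inferred from those paths, not documented here.
from datasets import load_dataset

ds = load_dataset(
    "bofenghuang/stt-pseudo-labeled-whisper-large-v3-multilingual",
    "distil-whisper/ami-ihm",  # assumed config name, inferred from file paths
    split="train",             # assumed split
    streaming=True,            # avoid downloading the full audio up front
)

for row in ds.take(2):
    print(row["duration"], row["wer"])
    print(row["text_norm"])
    print(row["whisper_transcript_norm"])
```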